Test Report: Docker_Linux_crio 22089

334c0a8a01ce6327cc86bd51efb70eb94afee1a0:2025-12-10:42712

Test failures (28/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.25
44 TestAddons/parallel/Registry 15.71
45 TestAddons/parallel/RegistryCreds 0.41
46 TestAddons/parallel/Ingress 147.49
47 TestAddons/parallel/InspektorGadget 6.25
48 TestAddons/parallel/MetricsServer 5.33
50 TestAddons/parallel/CSI 44.97
51 TestAddons/parallel/Headlamp 2.53
52 TestAddons/parallel/CloudSpanner 5.28
53 TestAddons/parallel/LocalPath 8.16
54 TestAddons/parallel/NvidiaDevicePlugin 5.27
55 TestAddons/parallel/Yakd 5.28
56 TestAddons/parallel/AmdGpuDevicePlugin 5.27
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.41
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 4.56
294 TestJSONOutput/pause/Command 2.1
300 TestJSONOutput/unpause/Command 1.76
405 TestPause/serial/Pause 6.04
451 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3
457 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.23
458 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.28
462 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.08
473 TestStartStop/group/old-k8s-version/serial/Pause 6.18
480 TestStartStop/group/no-preload/serial/Pause 6.52
483 TestStartStop/group/embed-certs/serial/Pause 5.91
486 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.2
492 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.49
496 TestStartStop/group/newest-cni/serial/Pause 5.98
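
Note on the failure pattern: most of the addon failures listed above end the same way in the traces below. The cleanup step "addons disable ..." exits with MK_ADDON_DISABLE_PAUSED (exit status 11) because the paused-cluster check shells into the node and runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory". A minimal reproduction sketch against this run's profile follows; it assumes the addons-028052 profile is still running, and the idea that crio on this kicbase image uses an OCI runtime other than runc (for example crun), leaving /run/runc absent, is an assumption rather than something the logs confirm.

	# List kube-system containers the way the disable path does (command copied from the traces below).
	out/minikube-linux-amd64 -p addons-028052 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The paused-state check that actually fails in this run:
	out/minikube-linux-amd64 -p addons-028052 ssh "sudo runc list -f json"
	# Observed here: exit status 1 with "open /run/runc: no such file or directory",
	# which minikube surfaces as MK_ADDON_DISABLE_PAUSED and exit status 11.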
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable volcano --alsologtostderr -v=1: exit status 11 (252.95812ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:45:55.523369   22012 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:45:55.523705   22012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:45:55.523717   22012 out.go:374] Setting ErrFile to fd 2...
	I1210 05:45:55.523724   22012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:45:55.523924   22012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:45:55.524206   22012 mustload.go:66] Loading cluster: addons-028052
	I1210 05:45:55.524577   22012 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:45:55.524599   22012 addons.go:622] checking whether the cluster is paused
	I1210 05:45:55.524702   22012 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:45:55.524719   22012 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:45:55.525194   22012 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:45:55.543926   22012 ssh_runner.go:195] Run: systemctl --version
	I1210 05:45:55.543976   22012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:45:55.561997   22012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:45:55.655383   22012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:45:55.655448   22012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:45:55.689411   22012 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:45:55.689435   22012 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:45:55.689441   22012 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:45:55.689446   22012 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:45:55.689451   22012 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:45:55.689463   22012 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:45:55.689481   22012 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:45:55.689485   22012 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:45:55.689491   22012 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:45:55.689514   22012 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:45:55.689521   22012 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:45:55.689526   22012 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:45:55.689534   22012 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:45:55.689539   22012 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:45:55.689544   22012 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:45:55.689555   22012 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:45:55.689560   22012 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:45:55.689566   22012 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:45:55.689571   22012 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:45:55.689575   22012 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:45:55.689580   22012 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:45:55.689590   22012 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:45:55.689595   22012 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:45:55.689605   22012 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:45:55.689609   22012 cri.go:89] found id: ""
	I1210 05:45:55.689661   22012 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:45:55.704905   22012 out.go:203] 
	W1210 05:45:55.706133   22012 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:45:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:45:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:45:55.706153   22012 out.go:285] * 
	* 
	W1210 05:45:55.709236   22012 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:45:55.710730   22012 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (15.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 2.689944ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003821785s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003691645s
addons_test.go:394: (dbg) Run:  kubectl --context addons-028052 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-028052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-028052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.216582851s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 ip
2025/12/10 05:46:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable registry --alsologtostderr -v=1: exit status 11 (259.404312ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:46:19.041770   24951 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:19.042133   24951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:19.042151   24951 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:19.042157   24951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:19.042546   24951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:19.042860   24951 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:19.043196   24951 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:19.043225   24951 addons.go:622] checking whether the cluster is paused
	I1210 05:46:19.043327   24951 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:19.043341   24951 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:19.043771   24951 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:19.062420   24951 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:19.062485   24951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:19.080803   24951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:19.179335   24951 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:19.179427   24951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:19.215348   24951 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:19.215384   24951 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:19.215390   24951 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:19.215405   24951 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:19.215410   24951 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:19.215416   24951 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:19.215421   24951 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:19.215426   24951 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:19.215431   24951 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:19.215440   24951 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:19.215445   24951 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:19.215450   24951 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:19.215455   24951 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:19.215460   24951 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:19.215464   24951 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:19.215484   24951 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:19.215489   24951 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:19.215495   24951 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:19.215500   24951 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:19.215504   24951 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:19.215513   24951 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:19.215524   24951 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:19.215528   24951 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:19.215534   24951 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:19.215539   24951 cri.go:89] found id: ""
	I1210 05:46:19.215592   24951 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:19.234579   24951 out.go:203] 
	W1210 05:46:19.235942   24951 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:19.235960   24951 out.go:285] * 
	* 
	W1210 05:46:19.238945   24951 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:19.240381   24951 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.71s)

TestAddons/parallel/RegistryCreds (0.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 2.458889ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-028052
addons_test.go:334: (dbg) Run:  kubectl --context addons-028052 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (246.416938ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 05:46:19.630869   25111 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:19.630995   25111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:19.631004   25111 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:19.631008   25111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:19.631225   25111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:19.631480   25111 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:19.631795   25111 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:19.631813   25111 addons.go:622] checking whether the cluster is paused
	I1210 05:46:19.631888   25111 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:19.631900   25111 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:19.632272   25111 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:19.651732   25111 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:19.651803   25111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:19.670439   25111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:19.767104   25111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:19.767185   25111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:19.796384   25111 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:19.796405   25111 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:19.796412   25111 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:19.796417   25111 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:19.796430   25111 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:19.796435   25111 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:19.796439   25111 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:19.796442   25111 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:19.796446   25111 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:19.796453   25111 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:19.796458   25111 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:19.796462   25111 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:19.796480   25111 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:19.796486   25111 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:19.796491   25111 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:19.796516   25111 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:19.796526   25111 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:19.796531   25111 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:19.796535   25111 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:19.796540   25111 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:19.796548   25111 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:19.796553   25111 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:19.796558   25111 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:19.796565   25111 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:19.796570   25111 cri.go:89] found id: ""
	I1210 05:46:19.796628   25111 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:19.811647   25111 out.go:203] 
	W1210 05:46:19.813084   25111 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:19.813115   25111 out.go:285] * 
	* 
	W1210 05:46:19.816000   25111 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:19.817500   25111 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

TestAddons/parallel/Ingress (147.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-028052 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-028052 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-028052 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [dd8f0d3c-7d89-4854-b35b-0905e3b6ab04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [dd8f0d3c-7d89-4854-b35b-0905e3b6ab04] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003534313s
I1210 05:46:23.685987   12374 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.882542508s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-028052 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
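
The failing step in this trace is the curl probe at addons_test.go:266: the "ssh: Process exited with status 28" line is the remote command's exit status, and curl's exit code 28 is its operation-timed-out error, so the HTTP request to the ingress inside the node hung rather than failing outright. A manual follow-up check is sketched below; it assumes the addons-028052 profile and the nginx Ingress from testdata/nginx-ingress-v1.yaml are still in place, and the 30-second timeout is only illustrative.

	# Re-run the probe with verbose output and an explicit timeout (curl exits 28 on timeout).
	out/minikube-linux-amd64 -p addons-028052 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Check whether the ingress-nginx controller pod is Ready and whether the Ingress got an address.
	kubectl --context addons-028052 -n ingress-nginx get pods -o wide
	kubectl --context addons-028052 get ingress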
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-028052
helpers_test.go:244: (dbg) docker inspect addons-028052:

-- stdout --
	[
	    {
	        "Id": "51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9",
	        "Created": "2025-12-10T05:44:14.519997499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:44:14.558891632Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/hosts",
	        "LogPath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9-json.log",
	        "Name": "/addons-028052",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-028052:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-028052",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9",
	                "LowerDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-028052",
	                "Source": "/var/lib/docker/volumes/addons-028052/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-028052",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-028052",
	                "name.minikube.sigs.k8s.io": "addons-028052",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a01efb72ebd326b77a7a2234e30dcd1c0f417d585e4d21113af5e3c2887e6c71",
	            "SandboxKey": "/var/run/docker/netns/a01efb72ebd3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-028052": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "089346d8609a769b6c49d5b936da4aac05f656c055bbbab0774e86d789ca5e72",
	                    "EndpointID": "84daf152514265731a1962fd2a5fd4d62b9e4c80bf9da5222828cdc0d99b979b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ca:f3:9b:36:83:0e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-028052",
	                        "51b0c35579b5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-028052 -n addons-028052
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-028052 logs -n 25: (1.153097412s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-740663 --alsologtostderr --binary-mirror http://127.0.0.1:43475 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-740663 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ -p binary-mirror-740663                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-740663 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ addons  │ disable dashboard -p addons-028052                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ addons  │ enable dashboard -p addons-028052                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ start   │ -p addons-028052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:45 UTC │
	│ addons  │ addons-028052 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:45 UTC │                     │
	│ addons  │ addons-028052 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ enable headlamp -p addons-028052 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ ssh     │ addons-028052 ssh cat /opt/local-path-provisioner/pvc-73b92f44-a60e-4168-b0e4-db2e6a8f021c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │ 10 Dec 25 05:46 UTC │
	│ addons  │ addons-028052 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ ip      │ addons-028052 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │ 10 Dec 25 05:46 UTC │
	│ addons  │ addons-028052 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-028052                                                                                                                                                                                                                                                                                                                                                                                           │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │ 10 Dec 25 05:46 UTC │
	│ addons  │ addons-028052 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ ssh     │ addons-028052 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ addons-028052 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ ip      │ addons-028052 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-028052        │ jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:43:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:43:50.706588   14122 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:50.706831   14122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:50.706841   14122 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:50.706845   14122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:50.707023   14122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:43:50.707563   14122 out.go:368] Setting JSON to false
	I1210 05:43:50.708410   14122 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1582,"bootTime":1765343849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:43:50.708481   14122 start.go:143] virtualization: kvm guest
	I1210 05:43:50.710460   14122 out.go:179] * [addons-028052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:43:50.711809   14122 notify.go:221] Checking for updates...
	I1210 05:43:50.711865   14122 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:43:50.713275   14122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:50.714587   14122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:43:50.715684   14122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:43:50.716966   14122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:43:50.718390   14122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:43:50.720044   14122 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:50.744230   14122 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:43:50.744342   14122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:50.798942   14122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:43:50.789304194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:50.799041   14122 docker.go:319] overlay module found
	I1210 05:43:50.801024   14122 out.go:179] * Using the docker driver based on user configuration
	I1210 05:43:50.802615   14122 start.go:309] selected driver: docker
	I1210 05:43:50.802633   14122 start.go:927] validating driver "docker" against <nil>
	I1210 05:43:50.802644   14122 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:43:50.803220   14122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:50.857926   14122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:43:50.848909586 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:50.858056   14122 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:43:50.858250   14122 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:43:50.860189   14122 out.go:179] * Using Docker driver with root privileges
	I1210 05:43:50.861302   14122 cni.go:84] Creating CNI manager for ""
	I1210 05:43:50.861379   14122 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:43:50.861395   14122 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 05:43:50.861488   14122 start.go:353] cluster config:
	{Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1210 05:43:50.862868   14122 out.go:179] * Starting "addons-028052" primary control-plane node in "addons-028052" cluster
	I1210 05:43:50.863971   14122 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 05:43:50.865319   14122 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 05:43:50.866684   14122 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:43:50.866717   14122 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 05:43:50.866723   14122 cache.go:65] Caching tarball of preloaded images
	I1210 05:43:50.866793   14122 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 05:43:50.866805   14122 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 05:43:50.866792   14122 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 05:43:50.867112   14122 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/config.json ...
	I1210 05:43:50.867148   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/config.json: {Name:mke5bc82231a890fb8e87878b1217790859e5087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:43:50.883930   14122 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 05:43:50.884050   14122 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 05:43:50.884072   14122 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory, skipping pull
	I1210 05:43:50.884076   14122 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in cache, skipping pull
	I1210 05:43:50.884083   14122 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca as a tarball
	I1210 05:43:50.884090   14122 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from local cache
	I1210 05:44:04.204967   14122 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from cached tarball
	I1210 05:44:04.205004   14122 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:44:04.205052   14122 start.go:360] acquireMachinesLock for addons-028052: {Name:mkb82df074e71d49290a9286f326d6fa899e9ce1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:44:04.205187   14122 start.go:364] duration metric: took 96.029µs to acquireMachinesLock for "addons-028052"
	I1210 05:44:04.205218   14122 start.go:93] Provisioning new machine with config: &{Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:44:04.205297   14122 start.go:125] createHost starting for "" (driver="docker")
	I1210 05:44:04.207452   14122 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 05:44:04.207712   14122 start.go:159] libmachine.API.Create for "addons-028052" (driver="docker")
	I1210 05:44:04.207747   14122 client.go:173] LocalClient.Create starting
	I1210 05:44:04.207846   14122 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 05:44:04.301087   14122 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 05:44:04.419732   14122 cli_runner.go:164] Run: docker network inspect addons-028052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 05:44:04.439773   14122 cli_runner.go:211] docker network inspect addons-028052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 05:44:04.439844   14122 network_create.go:284] running [docker network inspect addons-028052] to gather additional debugging logs...
	I1210 05:44:04.439863   14122 cli_runner.go:164] Run: docker network inspect addons-028052
	W1210 05:44:04.456581   14122 cli_runner.go:211] docker network inspect addons-028052 returned with exit code 1
	I1210 05:44:04.456608   14122 network_create.go:287] error running [docker network inspect addons-028052]: docker network inspect addons-028052: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-028052 not found
	I1210 05:44:04.456620   14122 network_create.go:289] output of [docker network inspect addons-028052]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-028052 not found
	
	** /stderr **
	I1210 05:44:04.456780   14122 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:44:04.476061   14122 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001db53b0}
	I1210 05:44:04.476146   14122 network_create.go:124] attempt to create docker network addons-028052 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 05:44:04.476200   14122 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-028052 addons-028052
	I1210 05:44:04.524230   14122 network_create.go:108] docker network addons-028052 192.168.49.0/24 created
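For reference, the subnet and gateway picked above can be checked against the live network. A minimal sketch using the network name from this log and the standard docker CLI (the same template fields minikube itself queries a few lines earlier):

	docker network inspect addons-028052 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected per the log above: 192.168.49.0/24 192.168.49.1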
	I1210 05:44:04.524264   14122 kic.go:121] calculated static IP "192.168.49.2" for the "addons-028052" container
	I1210 05:44:04.524380   14122 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 05:44:04.540855   14122 cli_runner.go:164] Run: docker volume create addons-028052 --label name.minikube.sigs.k8s.io=addons-028052 --label created_by.minikube.sigs.k8s.io=true
	I1210 05:44:04.559161   14122 oci.go:103] Successfully created a docker volume addons-028052
	I1210 05:44:04.559284   14122 cli_runner.go:164] Run: docker run --rm --name addons-028052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028052 --entrypoint /usr/bin/test -v addons-028052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 05:44:10.554517   14122 cli_runner.go:217] Completed: docker run --rm --name addons-028052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028052 --entrypoint /usr/bin/test -v addons-028052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib: (5.995142286s)
	I1210 05:44:10.554553   14122 oci.go:107] Successfully prepared a docker volume addons-028052
	I1210 05:44:10.554613   14122 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:10.554628   14122 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 05:44:10.554699   14122 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-028052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 05:44:14.449019   14122 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-028052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.894271889s)
	I1210 05:44:14.449074   14122 kic.go:203] duration metric: took 3.894421924s to extract preloaded images to volume ...
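To spot-check that the preload actually landed in the volume, the kicbase image can be reused with a different entrypoint, much like the sidecar run above. A hedged sketch; the path /var/lib/containers is an assumption about where the cri-o preload unpacks, and the image reference is the one from this log:

	docker run --rm --entrypoint /bin/ls \
	  -v addons-028052:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca \
	  /var/lib/containers
	# should list the containers-storage directories extracted from the preload tarball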
	W1210 05:44:14.449201   14122 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 05:44:14.449239   14122 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 05:44:14.449297   14122 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 05:44:14.503943   14122 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-028052 --name addons-028052 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028052 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-028052 --network addons-028052 --ip 192.168.49.2 --volume addons-028052:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 05:44:14.812049   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Running}}
	I1210 05:44:14.831237   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:14.850767   14122 cli_runner.go:164] Run: docker exec addons-028052 stat /var/lib/dpkg/alternatives/iptables
	I1210 05:44:14.902942   14122 oci.go:144] the created container "addons-028052" has a running status.
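The docker run above publishes the container's 8443, 22, 2376, 5000 and 32443 ports on ephemeral host ports bound to 127.0.0.1. A quick way to see which host ports were assigned:

	docker port addons-028052
	# or just the API server port:
	docker port addons-028052 8443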
	I1210 05:44:14.902977   14122 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa...
	I1210 05:44:15.010784   14122 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 05:44:15.036596   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:15.057120   14122 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 05:44:15.057146   14122 kic_runner.go:114] Args: [docker exec --privileged addons-028052 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 05:44:15.101605   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:15.126879   14122 machine.go:94] provisionDockerMachine start ...
	I1210 05:44:15.126987   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:15.153954   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:15.154227   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:15.154239   14122 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:44:15.155495   14122 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52150->127.0.0.1:32768: read: connection reset by peer
	I1210 05:44:18.287700   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-028052
	
	I1210 05:44:18.287728   14122 ubuntu.go:182] provisioning hostname "addons-028052"
	I1210 05:44:18.287800   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.306546   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:18.306751   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:18.306763   14122 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-028052 && echo "addons-028052" | sudo tee /etc/hostname
	I1210 05:44:18.446275   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-028052
	
	I1210 05:44:18.446355   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.464350   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:18.464591   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:18.464611   14122 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-028052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-028052/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-028052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:44:18.595160   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:44:18.595185   14122 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 05:44:18.595215   14122 ubuntu.go:190] setting up certificates
	I1210 05:44:18.595226   14122 provision.go:84] configureAuth start
	I1210 05:44:18.595270   14122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028052
	I1210 05:44:18.613098   14122 provision.go:143] copyHostCerts
	I1210 05:44:18.613179   14122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 05:44:18.613284   14122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 05:44:18.613342   14122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 05:44:18.613394   14122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.addons-028052 san=[127.0.0.1 192.168.49.2 addons-028052 localhost minikube]
	I1210 05:44:18.689603   14122 provision.go:177] copyRemoteCerts
	I1210 05:44:18.689656   14122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:44:18.689688   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.707578   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:18.802904   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:44:18.822660   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:44:18.840066   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:44:18.857327   14122 provision.go:87] duration metric: took 262.081326ms to configureAuth
	I1210 05:44:18.857376   14122 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:44:18.857569   14122 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:18.857663   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.875525   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:18.875786   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:18.875811   14122 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:44:19.141280   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:44:19.141303   14122 machine.go:97] duration metric: took 4.014400193s to provisionDockerMachine
	I1210 05:44:19.141313   14122 client.go:176] duration metric: took 14.933561238s to LocalClient.Create
	I1210 05:44:19.141340   14122 start.go:167] duration metric: took 14.933628326s to libmachine.API.Create "addons-028052"
	I1210 05:44:19.141349   14122 start.go:293] postStartSetup for "addons-028052" (driver="docker")
	I1210 05:44:19.141366   14122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:44:19.141420   14122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:44:19.141464   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.158974   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.254990   14122 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:44:19.258741   14122 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:44:19.258770   14122 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:44:19.258783   14122 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 05:44:19.258864   14122 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 05:44:19.258899   14122 start.go:296] duration metric: took 117.541727ms for postStartSetup
	I1210 05:44:19.259213   14122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028052
	I1210 05:44:19.278082   14122 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/config.json ...
	I1210 05:44:19.278363   14122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:44:19.278404   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.298857   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.390711   14122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:44:19.395180   14122 start.go:128] duration metric: took 15.189870457s to createHost
	I1210 05:44:19.395206   14122 start.go:83] releasing machines lock for "addons-028052", held for 15.190001233s
	I1210 05:44:19.395265   14122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028052
	I1210 05:44:19.413527   14122 ssh_runner.go:195] Run: cat /version.json
	I1210 05:44:19.413577   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.413618   14122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:44:19.413701   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.433824   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.434405   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.578583   14122 ssh_runner.go:195] Run: systemctl --version
	I1210 05:44:19.585100   14122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:44:19.617892   14122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:44:19.622301   14122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:44:19.622371   14122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:44:19.648715   14122 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:44:19.648738   14122 start.go:496] detecting cgroup driver to use...
	I1210 05:44:19.648765   14122 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 05:44:19.648805   14122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:44:19.664796   14122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:44:19.677032   14122 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:44:19.677100   14122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:44:19.693394   14122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:44:19.710723   14122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:44:19.792617   14122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:44:19.879870   14122 docker.go:234] disabling docker service ...
	I1210 05:44:19.879924   14122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:44:19.898125   14122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:44:19.910823   14122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:44:19.997295   14122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:44:20.074688   14122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:44:20.086838   14122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:44:20.100849   14122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:44:20.100903   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.111336   14122 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 05:44:20.111397   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.120179   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.128758   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.137459   14122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:44:20.145448   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.153890   14122 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.167192   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.176200   14122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:44:20.183953   14122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:44:20.184017   14122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:44:20.195718   14122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:44:20.203278   14122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:20.282107   14122 ssh_runner.go:195] Run: sudo systemctl restart crio
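After the sed edits above, the CRI-O drop-in should pin the pause image, the systemd cgroup manager, the conmon cgroup and the unprivileged-port sysctl. A hedged way to confirm, inside the node, using the same file the log edits:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf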
	I1210 05:44:20.415220   14122 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:44:20.415285   14122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:44:20.419229   14122 start.go:564] Will wait 60s for crictl version
	I1210 05:44:20.419280   14122 ssh_runner.go:195] Run: which crictl
	I1210 05:44:20.422851   14122 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:44:20.447061   14122 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 05:44:20.447186   14122 ssh_runner.go:195] Run: crio --version
	I1210 05:44:20.474522   14122 ssh_runner.go:195] Run: crio --version
	I1210 05:44:20.504341   14122 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 05:44:20.505521   14122 cli_runner.go:164] Run: docker network inspect addons-028052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:44:20.522134   14122 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:44:20.526149   14122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:44:20.536187   14122 kubeadm.go:884] updating cluster {Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:44:20.536354   14122 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:20.536415   14122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:44:20.567747   14122 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 05:44:20.567774   14122 crio.go:433] Images already preloaded, skipping extraction
	I1210 05:44:20.567815   14122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:44:20.591705   14122 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 05:44:20.591727   14122 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:44:20.591734   14122 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1210 05:44:20.591815   14122 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-028052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
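The ExecStart flags shown above are delivered as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). To see the effective unit plus its drop-ins on the node, one option:

	systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf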
	I1210 05:44:20.591874   14122 ssh_runner.go:195] Run: crio config
	I1210 05:44:20.635335   14122 cni.go:84] Creating CNI manager for ""
	I1210 05:44:20.635374   14122 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:44:20.635394   14122 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:44:20.635423   14122 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-028052 NodeName:addons-028052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:44:20.635573   14122 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-028052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:44:20.635647   14122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 05:44:20.643500   14122 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:44:20.643564   14122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:44:20.651537   14122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 05:44:20.664135   14122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:44:20.679376   14122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
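The kubeadm.yaml just copied bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration printed earlier. If the bundled kubeadm binary supports it, the file can be sanity-checked before init; a sketch, assuming kubeadm sits next to the kubelet binary referenced in the unit file:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new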
	I1210 05:44:20.692200   14122 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:44:20.695857   14122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:44:20.705748   14122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:20.780990   14122 ssh_runner.go:195] Run: sudo systemctl start kubelet
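kubelet is started here before kubeadm has run, so it may keep restarting until bootstrapping finishes. To confirm the unit files written above were picked up, inside the node:

	systemctl is-active kubelet
	journalctl -u kubelet --no-pager -n 20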
	I1210 05:44:20.802026   14122 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052 for IP: 192.168.49.2
	I1210 05:44:20.802056   14122 certs.go:195] generating shared ca certs ...
	I1210 05:44:20.802075   14122 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:20.802196   14122 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 05:44:20.996787   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt ...
	I1210 05:44:20.996813   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt: {Name:mk1d513f296e0364032ebd95d26dea0f51debf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:20.997012   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key ...
	I1210 05:44:20.997029   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key: {Name:mkdc1abbf79f324d72d891c5908933fa5d660c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:20.997137   14122 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 05:44:21.114674   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt ...
	I1210 05:44:21.114703   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt: {Name:mkcb5cd5e73a33b179e01ea7cc46ae79b5b0a262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.114880   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key ...
	I1210 05:44:21.114893   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key: {Name:mk1fcd7b0fcf2b218fdac4ffa80e78d4d2cd94f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.114994   14122 certs.go:257] generating profile certs ...
	I1210 05:44:21.115062   14122 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.key
	I1210 05:44:21.115077   14122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt with IP's: []
	I1210 05:44:21.176961   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt ...
	I1210 05:44:21.176991   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: {Name:mk7ac122e2baddd3f3b72bcf1a161b95df7673ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.177180   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.key ...
	I1210 05:44:21.177193   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.key: {Name:mkb65a53a364e0186156725a039b5fd6404ac52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.177293   14122 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919
	I1210 05:44:21.177315   14122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 05:44:21.222410   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919 ...
	I1210 05:44:21.222436   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919: {Name:mk7a476904d9d0a865c736e7fa3b577ceb879c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.222628   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919 ...
	I1210 05:44:21.222645   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919: {Name:mk20a2ae7675a7d4b1a9b68da172b57b8b6ee2c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.222747   14122 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt
	I1210 05:44:21.222845   14122 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key
	I1210 05:44:21.222901   14122 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key
	I1210 05:44:21.222918   14122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt with IP's: []
	I1210 05:44:21.273763   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt ...
	I1210 05:44:21.273791   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt: {Name:mk53bf2b7a1ecbef55230cbac25da73eee95b050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.273971   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key ...
	I1210 05:44:21.273987   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key: {Name:mkc842dbb4298c960e5236d1e4c6081c60234adc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.274180   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 05:44:21.274215   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:44:21.274240   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:44:21.274262   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 05:44:21.274867   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:44:21.292972   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:44:21.311017   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:44:21.329010   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:44:21.346674   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:44:21.364311   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 05:44:21.381735   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:44:21.399020   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:44:21.417304   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:44:21.436288   14122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:44:21.449088   14122 ssh_runner.go:195] Run: openssl version
	I1210 05:44:21.455224   14122 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.462703   14122 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:44:21.472792   14122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.476794   14122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.476846   14122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.510493   14122 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:44:21.518858   14122 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:44:21.527079   14122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:44:21.531225   14122 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:44:21.531283   14122 kubeadm.go:401] StartCluster: {Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:44:21.531365   14122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:44:21.531439   14122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:44:21.559720   14122 cri.go:89] found id: ""
	I1210 05:44:21.559786   14122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:44:21.567906   14122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:44:21.575565   14122 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:44:21.575626   14122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:44:21.583217   14122 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:44:21.583235   14122 kubeadm.go:158] found existing configuration files:
	
	I1210 05:44:21.583282   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:44:21.590827   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:44:21.590887   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:44:21.598178   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:44:21.605715   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:44:21.605775   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:44:21.613072   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:44:21.620285   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:44:21.620359   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:44:21.627898   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:44:21.635585   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:44:21.635641   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:44:21.642808   14122 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:44:21.680201   14122 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 05:44:21.680290   14122 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:44:21.700525   14122 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:44:21.700588   14122 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 05:44:21.700618   14122 kubeadm.go:319] OS: Linux
	I1210 05:44:21.700669   14122 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:44:21.700733   14122 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:44:21.700830   14122 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:44:21.700922   14122 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:44:21.700994   14122 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:44:21.701076   14122 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:44:21.701165   14122 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:44:21.701256   14122 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 05:44:21.755638   14122 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:44:21.755818   14122 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:44:21.755959   14122 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:44:21.762516   14122 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:44:21.765389   14122 out.go:252]   - Generating certificates and keys ...
	I1210 05:44:21.765508   14122 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:44:21.765621   14122 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:44:22.405231   14122 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:44:22.530593   14122 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:44:22.809538   14122 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:44:23.110980   14122 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:44:23.197722   14122 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:44:23.197881   14122 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-028052 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:44:23.624843   14122 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:44:23.624956   14122 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-028052 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:44:23.957276   14122 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:44:24.334748   14122 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:44:24.424527   14122 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:44:24.424607   14122 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:44:24.712431   14122 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:44:25.151928   14122 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:44:25.500330   14122 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:44:25.523126   14122 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:44:25.991809   14122 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:44:25.992188   14122 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:44:25.995755   14122 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:44:25.998415   14122 out.go:252]   - Booting up control plane ...
	I1210 05:44:25.998564   14122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:44:25.998655   14122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:44:25.998799   14122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:44:26.022820   14122 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:44:26.022966   14122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:44:26.029616   14122 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:44:26.029722   14122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:44:26.029763   14122 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:44:26.130994   14122 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:44:26.131136   14122 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:44:26.632880   14122 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.940174ms
	I1210 05:44:26.636635   14122 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:44:26.636751   14122 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 05:44:26.636846   14122 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:44:26.636927   14122 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:44:28.186939   14122 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.550285711s
	I1210 05:44:28.989612   14122 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.352995916s
	I1210 05:44:30.637986   14122 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001337765s
	I1210 05:44:30.653251   14122 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:44:30.666039   14122 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:44:30.677682   14122 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:44:30.677945   14122 kubeadm.go:319] [mark-control-plane] Marking the node addons-028052 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:44:30.686801   14122 kubeadm.go:319] [bootstrap-token] Using token: 0fuqj9.zxta1qtzv9xa5hm8
	I1210 05:44:30.688710   14122 out.go:252]   - Configuring RBAC rules ...
	I1210 05:44:30.688863   14122 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:44:30.692429   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:44:30.698509   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:44:30.702387   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:44:30.705073   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:44:30.708364   14122 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:44:31.045157   14122 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:44:31.459023   14122 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:44:32.043534   14122 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:44:32.044373   14122 kubeadm.go:319] 
	I1210 05:44:32.044506   14122 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:44:32.044518   14122 kubeadm.go:319] 
	I1210 05:44:32.044652   14122 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:44:32.044673   14122 kubeadm.go:319] 
	I1210 05:44:32.044715   14122 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:44:32.044799   14122 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:44:32.044899   14122 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:44:32.044915   14122 kubeadm.go:319] 
	I1210 05:44:32.045006   14122 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:44:32.045018   14122 kubeadm.go:319] 
	I1210 05:44:32.045093   14122 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:44:32.045103   14122 kubeadm.go:319] 
	I1210 05:44:32.045173   14122 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:44:32.045290   14122 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:44:32.045400   14122 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:44:32.045414   14122 kubeadm.go:319] 
	I1210 05:44:32.045548   14122 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:44:32.045650   14122 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:44:32.045658   14122 kubeadm.go:319] 
	I1210 05:44:32.045787   14122 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0fuqj9.zxta1qtzv9xa5hm8 \
	I1210 05:44:32.045927   14122 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 05:44:32.045959   14122 kubeadm.go:319] 	--control-plane 
	I1210 05:44:32.045967   14122 kubeadm.go:319] 
	I1210 05:44:32.046090   14122 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:44:32.046118   14122 kubeadm.go:319] 
	I1210 05:44:32.046262   14122 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0fuqj9.zxta1qtzv9xa5hm8 \
	I1210 05:44:32.046412   14122 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 05:44:32.048269   14122 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 05:44:32.048401   14122 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:44:32.048431   14122 cni.go:84] Creating CNI manager for ""
	I1210 05:44:32.048440   14122 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:44:32.050453   14122 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 05:44:32.051993   14122 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 05:44:32.056198   14122 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 05:44:32.056221   14122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 05:44:32.069246   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 05:44:32.269304   14122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:44:32.269373   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:32.269392   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-028052 minikube.k8s.io/updated_at=2025_12_10T05_44_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=addons-028052 minikube.k8s.io/primary=true
	I1210 05:44:32.350271   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:32.359606   14122 ops.go:34] apiserver oom_adj: -16
	I1210 05:44:32.850320   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:33.350316   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:33.850615   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:34.350340   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:34.851300   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:35.351399   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:35.851528   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:36.350527   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:36.851219   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:36.915407   14122 kubeadm.go:1114] duration metric: took 4.646092846s to wait for elevateKubeSystemPrivileges
	I1210 05:44:36.915440   14122 kubeadm.go:403] duration metric: took 15.384164126s to StartCluster
	I1210 05:44:36.915459   14122 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:36.915585   14122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:44:36.915943   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:36.916151   14122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:44:36.916185   14122 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:44:36.916232   14122 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:44:36.916351   14122 addons.go:70] Setting yakd=true in profile "addons-028052"
	I1210 05:44:36.916362   14122 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-028052"
	I1210 05:44:36.916376   14122 addons.go:239] Setting addon yakd=true in "addons-028052"
	I1210 05:44:36.916383   14122 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-028052"
	I1210 05:44:36.916395   14122 addons.go:70] Setting registry=true in profile "addons-028052"
	I1210 05:44:36.916421   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916426   14122 addons.go:70] Setting registry-creds=true in profile "addons-028052"
	I1210 05:44:36.916428   14122 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:36.916433   14122 addons.go:70] Setting default-storageclass=true in profile "addons-028052"
	I1210 05:44:36.916460   14122 addons.go:239] Setting addon registry-creds=true in "addons-028052"
	I1210 05:44:36.916486   14122 addons.go:70] Setting ingress-dns=true in profile "addons-028052"
	I1210 05:44:36.916493   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916442   14122 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-028052"
	I1210 05:44:36.916496   14122 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-028052"
	I1210 05:44:36.916507   14122 addons.go:70] Setting cloud-spanner=true in profile "addons-028052"
	I1210 05:44:36.916518   14122 addons.go:70] Setting inspektor-gadget=true in profile "addons-028052"
	I1210 05:44:36.916524   14122 addons.go:239] Setting addon cloud-spanner=true in "addons-028052"
	I1210 05:44:36.916536   14122 addons.go:239] Setting addon inspektor-gadget=true in "addons-028052"
	I1210 05:44:36.916550   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916590   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916599   14122 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-028052"
	I1210 05:44:36.916612   14122 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-028052"
	I1210 05:44:36.916730   14122 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-028052"
	I1210 05:44:36.916777   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916840   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916899   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916975   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916986   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917017   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917072   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917307   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917356   14122 addons.go:70] Setting metrics-server=true in profile "addons-028052"
	I1210 05:44:36.917379   14122 addons.go:239] Setting addon metrics-server=true in "addons-028052"
	I1210 05:44:36.917415   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916353   14122 addons.go:70] Setting gcp-auth=true in profile "addons-028052"
	I1210 05:44:36.917606   14122 mustload.go:66] Loading cluster: addons-028052
	I1210 05:44:36.917867   14122 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:36.918157   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.918240   14122 addons.go:70] Setting storage-provisioner=true in profile "addons-028052"
	I1210 05:44:36.918263   14122 addons.go:239] Setting addon storage-provisioner=true in "addons-028052"
	I1210 05:44:36.918287   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916454   14122 addons.go:239] Setting addon registry=true in "addons-028052"
	I1210 05:44:36.918674   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916420   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.918179   14122 addons.go:70] Setting volcano=true in profile "addons-028052"
	I1210 05:44:36.918940   14122 addons.go:239] Setting addon volcano=true in "addons-028052"
	I1210 05:44:36.918965   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.918159   14122 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-028052"
	I1210 05:44:36.919092   14122 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-028052"
	I1210 05:44:36.919121   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.919228   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916461   14122 addons.go:70] Setting ingress=true in profile "addons-028052"
	I1210 05:44:36.919285   14122 addons.go:239] Setting addon ingress=true in "addons-028052"
	I1210 05:44:36.919315   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.919373   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.919559   14122 out.go:179] * Verifying Kubernetes components...
	I1210 05:44:36.918191   14122 addons.go:70] Setting volumesnapshots=true in profile "addons-028052"
	I1210 05:44:36.919654   14122 addons.go:239] Setting addon volumesnapshots=true in "addons-028052"
	I1210 05:44:36.919680   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916501   14122 addons.go:239] Setting addon ingress-dns=true in "addons-028052"
	I1210 05:44:36.920026   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.921287   14122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:36.924995   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925131   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925135   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925283   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925733   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.929403   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.929561   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.965619   14122 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:44:36.970115   14122 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:44:36.970136   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:44:36.970215   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:36.982194   14122 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-028052"
	I1210 05:44:36.982248   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.982786   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.996915   14122 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:44:36.998557   14122 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:44:36.998581   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:44:36.998640   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.011279   14122 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:44:37.014638   14122 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:44:37.014662   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:44:37.014745   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.026143   14122 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:44:37.026362   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:44:37.027677   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:44:37.027700   14122 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:44:37.027790   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	W1210 05:44:37.028641   14122 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:44:37.029439   14122 addons.go:239] Setting addon default-storageclass=true in "addons-028052"
	I1210 05:44:37.029692   14122 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:44:37.029499   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:37.031514   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:44:37.031562   14122 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:44:37.031607   14122 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:44:37.031613   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:44:37.031767   14122 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:44:37.033771   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:44:37.033872   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.035034   14122 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 05:44:37.035085   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:44:37.035099   14122 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:44:37.035125   14122 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:44:37.035151   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:44:37.035157   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.035226   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.035420   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:44:37.035509   14122 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:44:37.035519   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:44:37.035568   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.036421   14122 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:44:37.036542   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:44:37.036647   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.036589   14122 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:44:37.038417   14122 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:44:37.039694   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:44:37.039878   14122 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:44:37.039918   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:44:37.039956   14122 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:44:37.039976   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:44:37.039988   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.040019   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.044496   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:44:37.049338   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:44:37.052265   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:37.053019   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:37.054750   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:44:37.059136   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:44:37.062134   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.062274   14122 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:44:37.062445   14122 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:44:37.062921   14122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 05:44:37.063717   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:44:37.063793   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:44:37.063807   14122 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:44:37.063893   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.065852   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:44:37.066181   14122 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:44:37.067317   14122 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:44:37.067374   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:44:37.067462   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.067502   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:44:37.077665   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:44:37.077692   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:44:37.077763   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.123224   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.123823   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.124606   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.130440   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.133236   14122 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:44:37.133258   14122 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:44:37.133314   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.136208   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.172758   14122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:44:37.176987   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.178461   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.194807   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.194859   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.203591   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	W1210 05:44:37.207854   14122 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:44:37.208052   14122 retry.go:31] will retry after 242.112943ms: ssh: handshake failed: EOF
	I1210 05:44:37.210490   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.210393   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	W1210 05:44:37.218536   14122 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:44:37.218570   14122 retry.go:31] will retry after 261.844164ms: ssh: handshake failed: EOF
	I1210 05:44:37.219758   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.231388   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.333046   14122 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:44:37.333068   14122 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:44:37.339629   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:44:37.340239   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:44:37.344570   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:44:37.355786   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:44:37.362705   14122 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:44:37.362732   14122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:44:37.363771   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:44:37.363794   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:44:37.365359   14122 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:44:37.365379   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:44:37.367873   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:44:37.382710   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:44:37.384080   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:44:37.396128   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:44:37.396160   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:44:37.401107   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:44:37.401879   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:44:37.401901   14122 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:44:37.402874   14122 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:44:37.402889   14122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:44:37.402892   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:44:37.402907   14122 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:44:37.410116   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:44:37.441543   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:44:37.441591   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:44:37.444696   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:44:37.444718   14122 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:44:37.454408   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:44:37.454431   14122 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:44:37.466734   14122 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:44:37.466764   14122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:44:37.499004   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:44:37.499027   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:44:37.517126   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:44:37.517159   14122 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:44:37.521942   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:44:37.527927   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:44:37.527952   14122 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:44:37.537017   14122 node_ready.go:35] waiting up to 6m0s for node "addons-028052" to be "Ready" ...
	I1210 05:44:37.537297   14122 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1210 05:44:37.574373   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:44:37.574395   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:44:37.576094   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:44:37.576114   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:44:37.613145   14122 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:44:37.613173   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:44:37.625779   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:44:37.625895   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:44:37.647792   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:44:37.690732   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:44:37.690764   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:44:37.694199   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:44:37.709767   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:44:37.728327   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:44:37.754995   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:44:37.755020   14122 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:44:37.789895   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:44:37.789923   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:44:37.822506   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:44:37.822545   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:44:37.871415   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:44:37.871443   14122 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:44:37.904015   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:44:38.047851   14122 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-028052" context rescaled to 1 replicas
	I1210 05:44:38.568903   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.167759983s)
	I1210 05:44:38.568903   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.184783302s)
	I1210 05:44:38.568957   14122 addons.go:495] Verifying addon ingress=true in "addons-028052"
	I1210 05:44:38.568959   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.15879269s)
	I1210 05:44:38.568985   14122 addons.go:495] Verifying addon registry=true in "addons-028052"
	I1210 05:44:38.569137   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.047163203s)
	I1210 05:44:38.569165   14122 addons.go:495] Verifying addon metrics-server=true in "addons-028052"
	I1210 05:44:38.570608   14122 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-028052 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:44:38.570617   14122 out.go:179] * Verifying ingress addon...
	I1210 05:44:38.570610   14122 out.go:179] * Verifying registry addon...
	I1210 05:44:38.572703   14122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:44:38.572819   14122 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:44:38.575417   14122 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:44:38.575541   14122 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:44:38.575556   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:39.009973   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.315713868s)
	W1210 05:44:39.010024   14122 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:44:39.010062   14122 retry.go:31] will retry after 152.124693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:44:39.010073   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.300248606s)
	I1210 05:44:39.010143   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.28178285s)
	I1210 05:44:39.010361   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.106308913s)
	I1210 05:44:39.010380   14122 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-028052"
	I1210 05:44:39.012936   14122 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:44:39.015215   14122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:44:39.017875   14122 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:44:39.017897   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:39.075670   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:39.075865   14122 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:44:39.075886   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:39.162949   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:44:39.519243   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:39.539855   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:39.619903   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:39.620089   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:40.018255   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:40.119744   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:40.119952   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:40.518334   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:40.619489   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:40.619567   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:41.018871   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:41.119712   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:41.119919   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:41.518896   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:41.540759   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:41.575598   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:41.575934   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:41.634370   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47137967s)
	I1210 05:44:42.018533   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:42.119233   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:42.119513   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:42.518546   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:42.575545   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:42.575545   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:43.018971   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:43.120233   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:43.120595   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:43.518741   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:43.575634   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:43.575845   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:44.018806   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:44.040236   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:44.119986   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:44.120041   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:44.518691   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:44.575616   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:44.575832   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:44.660283   14122 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:44:44.660347   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:44.678548   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:44.785765   14122 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:44:44.799188   14122 addons.go:239] Setting addon gcp-auth=true in "addons-028052"
	I1210 05:44:44.799264   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:44.799672   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:44.817202   14122 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:44:44.817274   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:44.834834   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:44.928692   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:44:44.930021   14122 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:44:44.931319   14122 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:44:44.931332   14122 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:44:44.945014   14122 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:44:44.945040   14122 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:44:44.958160   14122 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:44:44.958183   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:44:44.971062   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:44:45.018505   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:45.076062   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:45.076251   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:45.284550   14122 addons.go:495] Verifying addon gcp-auth=true in "addons-028052"
	I1210 05:44:45.286541   14122 out.go:179] * Verifying gcp-auth addon...
	I1210 05:44:45.288685   14122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:44:45.292285   14122 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:44:45.292305   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:45.518259   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:45.575438   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:45.575773   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:45.791605   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:46.017896   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:46.040368   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:46.076022   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:46.076147   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:46.291948   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:46.518716   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:46.575815   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:46.576023   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:46.791642   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:47.018444   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:47.076509   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:47.076769   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:47.292418   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:47.517981   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:47.577254   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:47.577421   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:47.792103   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:48.018796   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:48.075689   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:48.075858   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:48.291702   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:48.518663   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:48.540175   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:48.576000   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:48.576080   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:48.791818   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:49.018572   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:49.075558   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:49.075828   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:49.292414   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:49.518023   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:49.575873   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:49.576038   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:49.792018   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:50.018624   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:50.075408   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:50.075639   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:50.292781   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:50.518382   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:50.575703   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:50.575771   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:50.791583   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:51.018204   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:51.039960   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:51.075823   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:51.075928   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:51.291967   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:51.518564   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:51.575747   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:51.575912   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:51.791445   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:52.018299   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:52.076209   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:52.076431   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:52.292348   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:52.518212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:52.576151   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:52.576337   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:52.791772   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:53.018220   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:53.076212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:53.076278   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:53.291869   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:53.518557   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:53.540232   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:53.575797   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:53.576115   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:53.791532   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:54.017890   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:54.075607   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:54.075689   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:54.291396   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:54.518016   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:54.576282   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:54.576324   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:54.792054   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:55.018618   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:55.075404   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:55.075582   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:55.291799   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:55.518391   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:55.575686   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:55.575823   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:55.791605   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:56.018222   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:56.039690   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:56.076086   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:56.076238   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:56.291822   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:56.518287   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:56.576212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:56.576359   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:56.791973   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:57.017741   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:57.075757   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:57.075905   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:57.291668   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:57.518453   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:57.575513   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:57.575619   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:57.792238   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:58.017816   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:58.040312   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:58.075676   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:58.075868   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:58.291359   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:58.517911   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:58.575776   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:58.575867   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:58.791272   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:59.017604   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:59.075639   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:59.075705   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:59.291120   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:59.517604   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:59.575660   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:59.575687   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:59.791891   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:00.018715   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:00.075707   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:00.075730   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:00.291433   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:00.518020   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:00.540243   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:00.576051   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:00.576244   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:00.792122   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:01.017683   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:01.075696   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:01.075912   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:01.291723   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:01.518401   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:01.575611   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:01.575663   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:01.791427   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:02.018009   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:02.076335   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:02.076514   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:02.292242   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:02.517897   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:02.540626   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:02.576090   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:02.576243   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:02.791697   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:03.018235   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:03.075610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:03.075755   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:03.292412   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:03.518043   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:03.575990   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:03.576133   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:03.791739   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:04.018368   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:04.075307   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:04.075418   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:04.291968   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:04.518771   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:04.575869   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:04.575914   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:04.791366   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:05.017946   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:05.040545   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:05.076141   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:05.076253   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:05.291805   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:05.518723   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:05.575942   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:05.576006   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:05.792126   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:06.017515   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:06.075524   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:06.075637   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:06.291772   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:06.518541   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:06.575531   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:06.575700   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:06.792096   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:07.018524   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:07.075606   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:07.075761   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:07.291526   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:07.518176   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:07.539513   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:07.576610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:07.576745   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:07.791420   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:08.017950   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:08.075965   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:08.076024   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:08.291712   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:08.518396   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:08.576313   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:08.576325   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:08.791833   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:09.018715   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:09.075683   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:09.075828   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:09.291456   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:09.517819   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:09.540272   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:09.575691   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:09.575877   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:09.791703   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:10.018759   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:10.075915   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:10.075985   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:10.291570   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:10.518335   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:10.576488   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:10.576653   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:10.791207   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:11.018526   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:11.075424   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:11.075457   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:11.291633   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:11.518331   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:11.576066   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:11.576268   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:11.791362   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:12.018178   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:12.039623   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:12.076307   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:12.076558   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:12.292060   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:12.517990   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:12.576072   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:12.576248   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:12.791934   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:13.018578   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:13.075564   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:13.075719   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:13.291337   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:13.517774   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:13.575900   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:13.576067   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:13.791681   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:14.018451   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:14.039874   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:14.075624   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:14.075690   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:14.291236   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:14.517951   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:14.575766   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:14.575781   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:14.791777   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:15.018534   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:15.075576   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:15.075620   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:15.291269   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:15.517720   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:15.575624   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:15.575674   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:15.791311   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:16.017830   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:16.040025   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:16.075541   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:16.075653   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:16.292191   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:16.517712   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:16.575485   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:16.575715   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:16.791193   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:17.017852   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:17.075980   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:17.076018   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:17.291557   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:17.518324   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:17.575532   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:17.575567   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:17.792167   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:18.018802   14122 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:45:18.018829   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:18.039658   14122 node_ready.go:49] node "addons-028052" is "Ready"
	I1210 05:45:18.039691   14122 node_ready.go:38] duration metric: took 40.502641864s for node "addons-028052" to be "Ready" ...
	I1210 05:45:18.039708   14122 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:45:18.039761   14122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:45:18.056952   14122 api_server.go:72] duration metric: took 41.140730527s to wait for apiserver process to appear ...
	I1210 05:45:18.056979   14122 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:45:18.057001   14122 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 05:45:18.062499   14122 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 05:45:18.063560   14122 api_server.go:141] control plane version: v1.34.2
	I1210 05:45:18.063607   14122 api_server.go:131] duration metric: took 6.618504ms to wait for apiserver health ...
	I1210 05:45:18.063619   14122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:45:18.071935   14122 system_pods.go:59] 20 kube-system pods found
	I1210 05:45:18.071972   14122 system_pods.go:61] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending
	I1210 05:45:18.071987   14122 system_pods.go:61] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.071992   14122 system_pods.go:61] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending
	I1210 05:45:18.071998   14122 system_pods.go:61] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending
	I1210 05:45:18.072004   14122 system_pods.go:61] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending
	I1210 05:45:18.072009   14122 system_pods.go:61] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.072020   14122 system_pods.go:61] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.072028   14122 system_pods.go:61] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.072037   14122 system_pods.go:61] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.072046   14122 system_pods.go:61] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.072057   14122 system_pods.go:61] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.072063   14122 system_pods.go:61] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.072072   14122 system_pods.go:61] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.072077   14122 system_pods.go:61] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending
	I1210 05:45:18.072082   14122 system_pods.go:61] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending
	I1210 05:45:18.072087   14122 system_pods.go:61] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.072095   14122 system_pods.go:61] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending
	I1210 05:45:18.072106   14122 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.072115   14122 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending
	I1210 05:45:18.072123   14122 system_pods.go:61] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:45:18.072130   14122 system_pods.go:74] duration metric: took 8.504374ms to wait for pod list to return data ...
	I1210 05:45:18.072140   14122 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:45:18.074152   14122 default_sa.go:45] found service account: "default"
	I1210 05:45:18.074175   14122 default_sa.go:55] duration metric: took 2.022645ms for default service account to be created ...
	I1210 05:45:18.074198   14122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:45:18.078036   14122 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:45:18.078059   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:18.078718   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:18.079084   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:18.079103   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending
	I1210 05:45:18.079110   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.079114   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending
	I1210 05:45:18.079119   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending
	I1210 05:45:18.079122   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending
	I1210 05:45:18.079126   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.079129   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.079134   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.079137   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.079142   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.079145   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.079161   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.079169   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.079172   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending
	I1210 05:45:18.079180   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending
	I1210 05:45:18.079186   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.079193   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:18.079201   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.079210   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending
	I1210 05:45:18.079217   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:45:18.079233   14122 retry.go:31] will retry after 270.873681ms: missing components: kube-dns
	I1210 05:45:18.292121   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:18.396992   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:18.397039   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:18.397049   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.397058   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:18.397074   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:18.397083   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:18.397090   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.397097   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.397103   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.397109   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.397124   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.397130   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.397140   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.397148   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.397158   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:18.397170   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:18.397182   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.397194   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:18.397202   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.397212   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.397220   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:45:18.397238   14122 retry.go:31] will retry after 337.985151ms: missing components: kube-dns
	I1210 05:45:18.518860   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:18.575263   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:18.575355   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:18.739074   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:18.739113   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:18.739121   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.739127   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:18.739133   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:18.739138   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:18.739142   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.739148   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.739160   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.739176   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.739190   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.739196   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.739205   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.739212   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.739235   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:18.739245   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:18.739256   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.739265   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:18.739277   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.739286   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.739295   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Running
	I1210 05:45:18.739314   14122 retry.go:31] will retry after 419.508515ms: missing components: kube-dns
	I1210 05:45:18.838272   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:19.019504   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:19.076153   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:19.076216   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:19.162927   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:19.162962   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:19.162979   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:19.162988   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:19.162996   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:19.163007   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:19.163013   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:19.163020   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:19.163026   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:19.163035   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:19.163044   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:19.163051   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:19.163058   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:19.163067   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:19.163076   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:19.163089   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:19.163097   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:19.163105   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:19.163113   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.163125   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.163133   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Running
	I1210 05:45:19.163152   14122 retry.go:31] will retry after 543.949488ms: missing components: kube-dns
	I1210 05:45:19.291907   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:19.520367   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:19.577423   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:19.577567   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:19.712676   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:19.712713   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:19.712722   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Running
	I1210 05:45:19.712732   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:19.712740   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:19.712748   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:19.712763   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:19.712770   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:19.712775   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:19.712781   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:19.712789   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:19.712803   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:19.712817   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:19.712826   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:19.712834   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:19.712843   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:19.712852   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:19.712865   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:19.712875   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.712887   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.712893   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Running
	I1210 05:45:19.712907   14122 system_pods.go:126] duration metric: took 1.638702436s to wait for k8s-apps to be running ...
	I1210 05:45:19.712924   14122 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:45:19.712973   14122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:45:19.728961   14122 system_svc.go:56] duration metric: took 16.027252ms WaitForService to wait for kubelet
	I1210 05:45:19.728991   14122 kubeadm.go:587] duration metric: took 42.812773667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:45:19.729014   14122 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:45:19.732120   14122 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 05:45:19.732151   14122 node_conditions.go:123] node cpu capacity is 8
	I1210 05:45:19.732171   14122 node_conditions.go:105] duration metric: took 3.151408ms to run NodePressure ...
	I1210 05:45:19.732185   14122 start.go:242] waiting for startup goroutines ...
	I1210 05:45:19.811928   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:20.019134   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:20.075654   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:20.075670   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:20.292797   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:20.519444   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:20.620247   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:20.620297   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:20.792060   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:21.019190   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:21.075765   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:21.075805   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:21.292666   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:21.519875   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:21.577754   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:21.578203   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:21.792796   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:22.019041   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:22.075846   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:22.075895   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:22.291671   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:22.521076   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:22.575838   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:22.575868   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:22.791708   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:23.019197   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:23.076062   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:23.076104   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:23.291757   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:23.519354   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:23.576190   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:23.576229   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:23.791968   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:24.019389   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:24.076345   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:24.076375   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:24.292610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:24.519371   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:24.576070   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:24.576142   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:24.792176   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:25.019117   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:25.075788   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:25.075860   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:25.292896   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:25.519025   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:25.577176   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:25.577346   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:25.794000   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:26.018902   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:26.076685   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:26.076873   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:26.291646   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:26.518901   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:26.575353   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:26.575542   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:26.792967   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:27.019093   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:27.076651   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:27.076825   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:27.291081   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:27.519641   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:27.576511   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:27.576711   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:27.792790   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:28.019099   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:28.076268   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:28.076316   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.291966   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:28.522301   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:28.576063   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:28.576125   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.791875   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.019185   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:29.075854   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:29.076013   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:29.291923   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.519597   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:29.575613   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:29.575785   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:29.792059   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:30.019342   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:30.075924   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.075946   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:30.291964   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:30.584745   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.584921   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:30.585329   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:30.792571   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:31.018700   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:31.076794   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:31.077022   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.292058   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:31.519983   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:31.575957   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:31.575980   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.792216   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:32.019271   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:32.076312   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:32.076503   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:32.292837   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:32.520372   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:32.576289   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:32.576493   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:32.792925   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:33.019223   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:33.120236   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:33.120289   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:33.292145   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:33.518725   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:33.576758   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:33.576866   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:33.795286   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.068710   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:34.076171   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:34.076264   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:34.292106   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.518945   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:34.619329   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:34.619411   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:34.793129   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:35.018634   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:35.076059   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.076170   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:35.292936   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:35.519381   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:35.576026   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.576089   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:35.792053   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:36.019599   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:36.076718   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:36.076758   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:36.292713   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:36.519306   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:36.575628   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:36.575668   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:36.792682   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.018749   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:37.076219   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:37.076289   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:37.292064   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.519310   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:37.576114   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:37.576205   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:37.792757   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:38.021668   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:38.079443   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:38.079590   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.292160   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:38.518953   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:38.576611   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.576664   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:38.792972   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:39.018852   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:39.076435   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:39.076461   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:39.292193   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:39.518936   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:39.577012   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:39.577109   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:39.792020   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:40.018799   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:40.076016   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:40.076056   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:40.291746   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:40.519549   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:40.619718   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:40.619968   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:40.792183   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:41.018422   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:41.076374   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:41.076398   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:41.292582   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:41.520349   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:41.621039   14122 kapi.go:107] duration metric: took 1m3.048330906s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 05:45:41.621430   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:41.792672   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.019851   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:42.076996   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:42.292920   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.519195   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:42.576338   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:42.805033   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:43.022210   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:43.114229   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:43.292270   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:43.519907   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:43.621051   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:43.791553   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:44.019047   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:44.076883   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:44.291675   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:44.519376   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:44.582826   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:44.791318   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:45.018509   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:45.076260   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:45.292274   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:45.518215   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:45.575974   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:45.791979   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:46.020039   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:46.077509   14122 kapi.go:107] duration metric: took 1m7.504684566s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:45:46.292668   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:46.543803   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:46.792263   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:47.020391   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:47.292079   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:47.519151   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:47.792199   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:48.019415   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:48.292610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:48.519212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:48.792303   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:49.018675   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:49.291859   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:49.519649   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:49.793021   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.018733   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:50.291888   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.518887   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:50.792213   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:51.018348   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:51.292383   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:51.518556   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:51.793526   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.018500   14122 kapi.go:107] duration metric: took 1m13.003284005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:45:52.292373   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.791324   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:53.292842   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:53.791596   14122 kapi.go:107] duration metric: took 1m8.502911061s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:45:53.793268   14122 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-028052 cluster.
	I1210 05:45:53.794685   14122 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:45:53.796097   14122 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 05:45:53.797493   14122 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, cloud-spanner, inspektor-gadget, default-storageclass, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1210 05:45:53.798787   14122 addons.go:530] duration metric: took 1m16.882551499s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns cloud-spanner inspektor-gadget default-storageclass amd-gpu-device-plugin metrics-server yakd storage-provisioner storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1210 05:45:53.798830   14122 start.go:247] waiting for cluster config update ...
	I1210 05:45:53.798854   14122 start.go:256] writing updated cluster config ...
	I1210 05:45:53.799094   14122 ssh_runner.go:195] Run: rm -f paused
	I1210 05:45:53.803058   14122 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:45:53.806139   14122 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rhtg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.810245   14122 pod_ready.go:94] pod "coredns-66bc5c9577-rhtg8" is "Ready"
	I1210 05:45:53.810271   14122 pod_ready.go:86] duration metric: took 4.109842ms for pod "coredns-66bc5c9577-rhtg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.812437   14122 pod_ready.go:83] waiting for pod "etcd-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.816395   14122 pod_ready.go:94] pod "etcd-addons-028052" is "Ready"
	I1210 05:45:53.816419   14122 pod_ready.go:86] duration metric: took 3.961406ms for pod "etcd-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.818275   14122 pod_ready.go:83] waiting for pod "kube-apiserver-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.821756   14122 pod_ready.go:94] pod "kube-apiserver-addons-028052" is "Ready"
	I1210 05:45:53.821776   14122 pod_ready.go:86] duration metric: took 3.48167ms for pod "kube-apiserver-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.823590   14122 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:54.206595   14122 pod_ready.go:94] pod "kube-controller-manager-addons-028052" is "Ready"
	I1210 05:45:54.206627   14122 pod_ready.go:86] duration metric: took 383.017978ms for pod "kube-controller-manager-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:54.406798   14122 pod_ready.go:83] waiting for pod "kube-proxy-jrpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:54.806884   14122 pod_ready.go:94] pod "kube-proxy-jrpnr" is "Ready"
	I1210 05:45:54.806908   14122 pod_ready.go:86] duration metric: took 400.084395ms for pod "kube-proxy-jrpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:55.006739   14122 pod_ready.go:83] waiting for pod "kube-scheduler-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:55.406772   14122 pod_ready.go:94] pod "kube-scheduler-addons-028052" is "Ready"
	I1210 05:45:55.406800   14122 pod_ready.go:86] duration metric: took 400.035752ms for pod "kube-scheduler-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:55.406812   14122 pod_ready.go:40] duration metric: took 1.60372617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:45:55.450990   14122 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 05:45:55.453996   14122 out.go:179] * Done! kubectl is now configured to use "addons-028052" cluster and "default" namespace by default
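
The gcp-auth messages above explain that GCP credentials are mounted into every new pod, and that a pod can opt out by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch, not part of this test run (the pod name and image below are illustrative assumptions), such an opt-out pod could be created like this:

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo          # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"    # label key mentioned in the gcp-auth messages above
	spec:
	  containers:
	  - name: echo
	    image: busybox
	    command: ["sleep", "3600"]
	EOF

Per the same messages, pods created in addons-028052 without this label get the credentials mounted automatically, while pods that existed before the addon was enabled have to be recreated (or the addon re-enabled with --refresh) to pick them up.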
	
	
	==> CRI-O <==
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.008023835Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-6kjqg/POD" id=3eed6c48-73c4-43dc-a8fc-3de53cdc0389 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.008113629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.016004336Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6kjqg Namespace:default ID:4381edbd6b98df9251e068dd03b0b089a73878aa0aeaca8fe65b8e0262a02d9e UID:21da0661-2e1c-4944-9625-566d0c3fd747 NetNS:/var/run/netns/21edfa10-7929-4c30-80f1-c10605d5807a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000eb8400}] Aliases:map[]}"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.016043451Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-6kjqg to CNI network \"kindnet\" (type=ptp)"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.027259136Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6kjqg Namespace:default ID:4381edbd6b98df9251e068dd03b0b089a73878aa0aeaca8fe65b8e0262a02d9e UID:21da0661-2e1c-4944-9625-566d0c3fd747 NetNS:/var/run/netns/21edfa10-7929-4c30-80f1-c10605d5807a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000eb8400}] Aliases:map[]}"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.027450015Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-6kjqg for CNI network kindnet (type=ptp)"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.028741409Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.029962683Z" level=info msg="Ran pod sandbox 4381edbd6b98df9251e068dd03b0b089a73878aa0aeaca8fe65b8e0262a02d9e with infra container: default/hello-world-app-5d498dc89-6kjqg/POD" id=3eed6c48-73c4-43dc-a8fc-3de53cdc0389 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.031264814Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9d1ebaa9-db78-48df-afbd-ae6db9fb9148 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.0314073Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9d1ebaa9-db78-48df-afbd-ae6db9fb9148 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.031463749Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=9d1ebaa9-db78-48df-afbd-ae6db9fb9148 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.032151633Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e0d0c577-0086-453b-ab9c-b516de18ce67 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.03690781Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.396877928Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=e0d0c577-0086-453b-ab9c-b516de18ce67 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.397461467Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=abcc2024-7b71-4f07-b91a-2430fc352179 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.39897906Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=586d1e34-8dbd-4ad1-9976-3b28235c3d30 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.403238817Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-6kjqg/hello-world-app" id=6c397a22-1bf3-499e-bfb1-c08bfa889fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.403368597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.409042256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.409272975Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/28cada620ca1e1ddb5f52736799c3349da134708e5939103f84b4d7bc63476da/merged/etc/passwd: no such file or directory"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.409304073Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/28cada620ca1e1ddb5f52736799c3349da134708e5939103f84b4d7bc63476da/merged/etc/group: no such file or directory"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.409605374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.449414848Z" level=info msg="Created container dc0a37ed8db247593c9cad0dae91951f0ad7ce3eee463f0d3bbcf696a6be28f3: default/hello-world-app-5d498dc89-6kjqg/hello-world-app" id=6c397a22-1bf3-499e-bfb1-c08bfa889fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.450217288Z" level=info msg="Starting container: dc0a37ed8db247593c9cad0dae91951f0ad7ce3eee463f0d3bbcf696a6be28f3" id=7fc5c763-4061-4349-bcb8-2499e92f021d name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 05:48:39 addons-028052 crio[774]: time="2025-12-10T05:48:39.453252333Z" level=info msg="Started container" PID=9481 containerID=dc0a37ed8db247593c9cad0dae91951f0ad7ce3eee463f0d3bbcf696a6be28f3 description=default/hello-world-app-5d498dc89-6kjqg/hello-world-app id=7fc5c763-4061-4349-bcb8-2499e92f021d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4381edbd6b98df9251e068dd03b0b089a73878aa0aeaca8fe65b8e0262a02d9e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	dc0a37ed8db24       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   4381edbd6b98d       hello-world-app-5d498dc89-6kjqg             default
	3a3efde58fa77       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   38414eb6eb817       registry-creds-764b6fb674-zmx8t             kube-system
	f8c2bf49c64f5       public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9                                           2 minutes ago            Running             nginx                                    0                   d70048123baf9       nginx                                       default
	9235598d48cf1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   1aefb4fd76197       busybox                                     default
	1ab046fa4ded9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   3250592b20f40       gcp-auth-78565c9fb4-7rkqb                   gcp-auth
	16d883ea0cc67       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	736d6c57ec43c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	660e106c0ca88       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	b77860e4ca7d8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	ec45dc6fa552f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   4177282b3fdfb       gadget-t97f6                                gadget
	15bdf91e47125       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	21f400e0a06c0       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   70ea83e8528e7       ingress-nginx-controller-85d4c799dd-n2nrt   ingress-nginx
	b348e5c8e523a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   1db5ee856e887       registry-proxy-kql6j                        kube-system
	03c1319ba40ad       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   68ec1d3bdaae1       nvidia-device-plugin-daemonset-n659m        kube-system
	30e7ebcfff065       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	1f872b473fd2a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   89233a02cd793       snapshot-controller-7d9fbc56b8-jptd2        kube-system
	304fa9c779484       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   dfdc90a55197a       amd-gpu-device-plugin-8nkkv                 kube-system
	3d4ccc4d76ae4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   e12ad4f5ad312       snapshot-controller-7d9fbc56b8-vfr4b        kube-system
	a0bbf399c1145       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   6fb4ac04bd0a4       csi-hostpath-attacher-0                     kube-system
	5f58fcc00134e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   df7b1fb2ab674       csi-hostpath-resizer-0                      kube-system
	0ff28f657acbf       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             3 minutes ago            Exited              patch                                    1                   fcc0359cc56e0       ingress-nginx-admission-patch-297k4         ingress-nginx
	f7b4692966d8d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   6a2ab6d92867c       local-path-provisioner-648f6765c9-lhr8t     local-path-storage
	8acf6b3a51621       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   8ecdd7139be6f       ingress-nginx-admission-create-z5xgg        ingress-nginx
	dec533b105023       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   99f23fb5e38de       registry-6b586f9694-6cvjm                   kube-system
	64067205d65ee       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   31a763da0a796       yakd-dashboard-5ff678cb9-7tm8b              yakd-dashboard
	7c725f36dd3b4       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   9e4d253065ff7       metrics-server-85b7d694d7-2mwh2             kube-system
	6ed5ed25f8d19       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   9201ddf3b7e41       kube-ingress-dns-minikube                   kube-system
	844cf81982783       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   985dbf2e9cbca       cloud-spanner-emulator-5bdddb765-qb5vh      default
	9d1fa5291d10e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   24cc0b5a03870       coredns-66bc5c9577-rhtg8                    kube-system
	58125e9bcfadd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   b2a60f3d36b30       storage-provisioner                         kube-system
	fbc11ef328020       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   f2214b915976d       kindnet-rvmds                               kube-system
	9497319e6c1c1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   1c4e7abfd2ce9       kube-proxy-jrpnr                            kube-system
	0122c6e10b651       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   a140c8c7e5204       kube-scheduler-addons-028052                kube-system
	f1f5e9bce84f7       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   9042190ab2d70       kube-controller-manager-addons-028052       kube-system
	65e519df51c1d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   eb93d9eed3221       kube-apiserver-addons-028052                kube-system
	965d086a638c9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   6ae4a87efe1f6       etcd-addons-028052                          kube-system
	
	
	==> coredns [9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce] <==
	[INFO] 10.244.0.22:54054 - 7752 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160634s
	[INFO] 10.244.0.22:43035 - 46267 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00893555s
	[INFO] 10.244.0.22:55920 - 54747 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.009294086s
	[INFO] 10.244.0.22:48051 - 57961 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004975862s
	[INFO] 10.244.0.22:48123 - 26820 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00514599s
	[INFO] 10.244.0.22:34261 - 5022 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004381886s
	[INFO] 10.244.0.22:51690 - 13895 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006358568s
	[INFO] 10.244.0.22:44085 - 19583 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00114011s
	[INFO] 10.244.0.22:50556 - 14900 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003151752s
	[INFO] 10.244.0.28:59168 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000248721s
	[INFO] 10.244.0.28:55622 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174707s
	[INFO] 10.244.0.29:57624 - 41042 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000227353s
	[INFO] 10.244.0.29:58100 - 30576 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000311674s
	[INFO] 10.244.0.29:59790 - 62092 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00014129s
	[INFO] 10.244.0.29:45196 - 46170 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000203431s
	[INFO] 10.244.0.29:44095 - 41568 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000106847s
	[INFO] 10.244.0.29:40135 - 2177 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000134411s
	[INFO] 10.244.0.29:60809 - 33351 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007236121s
	[INFO] 10.244.0.29:50091 - 35565 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.008657749s
	[INFO] 10.244.0.29:45354 - 47223 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006343962s
	[INFO] 10.244.0.29:33002 - 39709 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.007468678s
	[INFO] 10.244.0.29:34471 - 15104 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006047424s
	[INFO] 10.244.0.29:36554 - 60655 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006879089s
	[INFO] 10.244.0.29:36590 - 60859 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001953185s
	[INFO] 10.244.0.29:35643 - 17278 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002214282s
	
	
	==> describe nodes <==
	Name:               addons-028052
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-028052
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=addons-028052
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_44_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-028052
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-028052"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-028052
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:48:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:48:16 +0000   Wed, 10 Dec 2025 05:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:48:16 +0000   Wed, 10 Dec 2025 05:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:48:16 +0000   Wed, 10 Dec 2025 05:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:48:16 +0000   Wed, 10 Dec 2025 05:45:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-028052
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                395aad63-f01b-4f03-a5d4-f3c6cb3cd468
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     cloud-spanner-emulator-5bdddb765-qb5vh       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  default                     hello-world-app-5d498dc89-6kjqg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-t97f6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  gcp-auth                    gcp-auth-78565c9fb4-7rkqb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-n2nrt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m2s
	  kube-system                 amd-gpu-device-plugin-8nkkv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 coredns-66bc5c9577-rhtg8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpathplugin-8vnr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 etcd-addons-028052                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-rvmds                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m4s
	  kube-system                 kube-apiserver-addons-028052                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-028052        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-jrpnr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-addons-028052                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 metrics-server-85b7d694d7-2mwh2              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m2s
	  kube-system                 nvidia-device-plugin-daemonset-n659m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 registry-6b586f9694-6cvjm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-creds-764b6fb674-zmx8t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-proxy-kql6j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-jptd2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-vfr4b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  local-path-storage          local-path-provisioner-648f6765c9-lhr8t      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7tm8b               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node addons-028052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node addons-028052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x8 over 4m14s)  kubelet          Node addons-028052 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet          Node addons-028052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet          Node addons-028052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet          Node addons-028052 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m5s                   node-controller  Node addons-028052 event: Registered Node addons-028052 in Controller
	  Normal  NodeReady                3m23s                  kubelet          Node addons-028052 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.099492] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028889] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.744944] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 05:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.032224] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023939] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +2.047757] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +4.031567] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +8.191127] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[ +16.382234] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[Dec10 05:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	
	
	==> etcd [965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896] <==
	{"level":"warn","ts":"2025-12-10T05:44:28.389289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.396158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.413700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.426945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.434202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.443074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.449595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.457252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.464960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.471890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.478949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.485548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.493170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.515498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.523962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.530177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.575049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:39.386027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:39.392600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.958777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.965597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.979899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.986327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57022","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T05:45:30.786536Z","caller":"traceutil/trace.go:172","msg":"trace[1424425179] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"102.513356ms","start":"2025-12-10T05:45:30.684000Z","end":"2025-12-10T05:45:30.786513Z","steps":["trace[1424425179] 'process raft request'  (duration: 20.966769ms)","trace[1424425179] 'compare'  (duration: 81.403526ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:45:44.918842Z","caller":"traceutil/trace.go:172","msg":"trace[1076135398] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"108.142692ms","start":"2025-12-10T05:45:44.810681Z","end":"2025-12-10T05:45:44.918824Z","steps":["trace[1076135398] 'process raft request'  (duration: 108.026303ms)"],"step_count":1}
	
	
	==> gcp-auth [1ab046fa4ded9f206820086fb67bbe704ab6d1f08a9650b1827d72e28261c43e] <==
	2025/12/10 05:45:53 GCP Auth Webhook started!
	2025/12/10 05:45:55 Ready to marshal response ...
	2025/12/10 05:45:55 Ready to write response ...
	2025/12/10 05:45:55 Ready to marshal response ...
	2025/12/10 05:45:55 Ready to write response ...
	2025/12/10 05:45:56 Ready to marshal response ...
	2025/12/10 05:45:56 Ready to write response ...
	2025/12/10 05:46:06 Ready to marshal response ...
	2025/12/10 05:46:06 Ready to write response ...
	2025/12/10 05:46:06 Ready to marshal response ...
	2025/12/10 05:46:06 Ready to write response ...
	2025/12/10 05:46:13 Ready to marshal response ...
	2025/12/10 05:46:13 Ready to write response ...
	2025/12/10 05:46:14 Ready to marshal response ...
	2025/12/10 05:46:14 Ready to write response ...
	2025/12/10 05:46:14 Ready to marshal response ...
	2025/12/10 05:46:14 Ready to write response ...
	2025/12/10 05:46:25 Ready to marshal response ...
	2025/12/10 05:46:25 Ready to write response ...
	2025/12/10 05:46:49 Ready to marshal response ...
	2025/12/10 05:46:49 Ready to write response ...
	2025/12/10 05:48:38 Ready to marshal response ...
	2025/12/10 05:48:38 Ready to write response ...
	
	
	==> kernel <==
	 05:48:40 up 31 min,  0 user,  load average: 0.22, 0.57, 0.33
	Linux addons-028052 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671] <==
	I1210 05:46:37.850606       1 main.go:301] handling current node
	I1210 05:46:47.855628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:46:47.855665       1 main.go:301] handling current node
	I1210 05:46:57.850934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:46:57.850978       1 main.go:301] handling current node
	I1210 05:47:07.851266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:47:07.851307       1 main.go:301] handling current node
	I1210 05:47:17.851266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:47:17.851299       1 main.go:301] handling current node
	I1210 05:47:27.857509       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:47:27.857543       1 main.go:301] handling current node
	I1210 05:47:37.858696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:47:37.858726       1 main.go:301] handling current node
	I1210 05:47:47.852180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:47:47.852227       1 main.go:301] handling current node
	I1210 05:47:57.851339       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:47:57.851379       1 main.go:301] handling current node
	I1210 05:48:07.851032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:48:07.851065       1 main.go:301] handling current node
	I1210 05:48:17.851409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:48:17.851447       1 main.go:301] handling current node
	I1210 05:48:27.850616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:48:27.850660       1 main.go:301] handling current node
	I1210 05:48:37.850679       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:48:37.850704       1 main.go:301] handling current node
	
	
	==> kube-apiserver [65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 05:45:38.504251       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.173.142:443: connect: connection refused" logger="UnhandledError"
	W1210 05:45:39.078449       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:39.078510       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:45:39.078529       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 05:45:39.079590       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:39.079667       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 05:45:39.079681       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 05:45:43.513616       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:43.513669       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 05:45:43.513813       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1210 05:45:43.522893       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:46:03.144624       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59606: use of closed network connection
	E1210 05:46:03.293326       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59652: use of closed network connection
	I1210 05:46:14.482774       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 05:46:14.671002       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.109.200"}
	I1210 05:46:32.447800       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 05:48:38.776808       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.18.117"}
	
	
	==> kube-controller-manager [f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15] <==
	I1210 05:44:35.942442       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-028052"
	I1210 05:44:35.942496       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 05:44:35.942532       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 05:44:35.943694       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 05:44:35.943716       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 05:44:35.943749       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 05:44:35.943808       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 05:44:35.943811       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 05:44:35.944924       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 05:44:35.946749       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 05:44:35.947867       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:44:35.949048       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 05:44:35.958448       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 05:44:35.963763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 05:44:38.126021       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 05:45:05.951744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 05:45:05.951936       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 05:45:05.951985       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 05:45:05.970770       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 05:45:05.974493       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 05:45:06.052903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:45:06.075305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:45:20.949849       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1210 05:45:36.058497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 05:45:36.082969       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d] <==
	I1210 05:44:37.331795       1 server_linux.go:53] "Using iptables proxy"
	I1210 05:44:37.540010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:44:37.640762       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:44:37.643652       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 05:44:37.643783       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:44:37.917604       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 05:44:37.921346       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:44:37.982807       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:44:37.989266       1 server.go:527] "Version info" version="v1.34.2"
	I1210 05:44:37.995646       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:44:37.998084       1 config.go:200] "Starting service config controller"
	I1210 05:44:37.998173       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:44:37.998215       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:44:37.998261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:44:37.998291       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:44:37.998313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:44:37.998497       1 config.go:309] "Starting node config controller"
	I1210 05:44:37.998532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:44:37.999543       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:44:38.102620       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:44:38.102750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 05:44:38.101550       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb] <==
	E1210 05:44:28.986810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:44:28.987577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:44:28.987602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:44:28.987807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:44:28.987825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:44:28.987945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:44:28.987965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:44:28.988003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:44:28.988057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:44:28.988107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:44:28.988168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:44:28.988270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:44:28.988314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:44:28.988804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:44:28.988868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:44:28.989117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:44:29.961220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:44:29.977379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:44:30.009370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:44:30.047862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:44:30.068990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:44:30.112288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:44:30.204830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:44:30.440348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 05:44:32.385109       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:46:50 addons-028052 kubelet[1302]: I1210 05:46:50.897014    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.896988387 podStartE2EDuration="1.896988387s" podCreationTimestamp="2025-12-10 05:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 05:46:50.895704292 +0000 UTC m=+139.708635905" watchObservedRunningTime="2025-12-10 05:46:50.896988387 +0000 UTC m=+139.709920000"
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.907392    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnqvf\" (UniqueName: \"kubernetes.io/projected/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069-kube-api-access-fnqvf\") pod \"d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069\" (UID: \"d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069\") "
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.907497    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069-gcp-creds\") pod \"d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069\" (UID: \"d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069\") "
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.907566    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069" (UID: "d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.907595    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9e839f5b-d58b-11f0-8409-46f8a5b96b0c\") pod \"d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069\" (UID: \"d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069\") "
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.907701    1302 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069-gcp-creds\") on node \"addons-028052\" DevicePath \"\""
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.910195    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069-kube-api-access-fnqvf" (OuterVolumeSpecName: "kube-api-access-fnqvf") pod "d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069" (UID: "d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069"). InnerVolumeSpecName "kube-api-access-fnqvf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.910419    1302 scope.go:117] "RemoveContainer" containerID="945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109"
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.910911    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^9e839f5b-d58b-11f0-8409-46f8a5b96b0c" (OuterVolumeSpecName: "task-pv-storage") pod "d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069" (UID: "d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069"). InnerVolumeSpecName "pvc-d4b896ab-b357-44af-9628-4bd426004666". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.921973    1302 scope.go:117] "RemoveContainer" containerID="945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109"
	Dec 10 05:46:57 addons-028052 kubelet[1302]: E1210 05:46:57.922383    1302 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109\": container with ID starting with 945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109 not found: ID does not exist" containerID="945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109"
	Dec 10 05:46:57 addons-028052 kubelet[1302]: I1210 05:46:57.922431    1302 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109"} err="failed to get container status \"945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109\": rpc error: code = NotFound desc = could not find container \"945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109\": container with ID starting with 945aab9274b6b1fec20119ceb08961e4531e043ec1b241823d486bbc65fee109 not found: ID does not exist"
	Dec 10 05:46:58 addons-028052 kubelet[1302]: I1210 05:46:58.008353    1302 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-d4b896ab-b357-44af-9628-4bd426004666\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9e839f5b-d58b-11f0-8409-46f8a5b96b0c\") on node \"addons-028052\" "
	Dec 10 05:46:58 addons-028052 kubelet[1302]: I1210 05:46:58.008388    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnqvf\" (UniqueName: \"kubernetes.io/projected/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069-kube-api-access-fnqvf\") on node \"addons-028052\" DevicePath \"\""
	Dec 10 05:46:58 addons-028052 kubelet[1302]: I1210 05:46:58.012642    1302 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-d4b896ab-b357-44af-9628-4bd426004666" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9e839f5b-d58b-11f0-8409-46f8a5b96b0c") on node "addons-028052"
	Dec 10 05:46:58 addons-028052 kubelet[1302]: I1210 05:46:58.109522    1302 reconciler_common.go:299] "Volume detached for volume \"pvc-d4b896ab-b357-44af-9628-4bd426004666\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9e839f5b-d58b-11f0-8409-46f8a5b96b0c\") on node \"addons-028052\" DevicePath \"\""
	Dec 10 05:46:58 addons-028052 kubelet[1302]: I1210 05:46:58.278660    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kql6j" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:46:59 addons-028052 kubelet[1302]: I1210 05:46:59.282023    1302 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069" path="/var/lib/kubelet/pods/d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069/volumes"
	Dec 10 05:47:10 addons-028052 kubelet[1302]: I1210 05:47:10.278682    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-n659m" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:47:54 addons-028052 kubelet[1302]: I1210 05:47:54.278320    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8nkkv" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:48:06 addons-028052 kubelet[1302]: I1210 05:48:06.278915    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kql6j" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:48:32 addons-028052 kubelet[1302]: I1210 05:48:32.279051    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-n659m" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:48:38 addons-028052 kubelet[1302]: I1210 05:48:38.862372    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/21da0661-2e1c-4944-9625-566d0c3fd747-gcp-creds\") pod \"hello-world-app-5d498dc89-6kjqg\" (UID: \"21da0661-2e1c-4944-9625-566d0c3fd747\") " pod="default/hello-world-app-5d498dc89-6kjqg"
	Dec 10 05:48:38 addons-028052 kubelet[1302]: I1210 05:48:38.862442    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtql\" (UniqueName: \"kubernetes.io/projected/21da0661-2e1c-4944-9625-566d0c3fd747-kube-api-access-jmtql\") pod \"hello-world-app-5d498dc89-6kjqg\" (UID: \"21da0661-2e1c-4944-9625-566d0c3fd747\") " pod="default/hello-world-app-5d498dc89-6kjqg"
	Dec 10 05:48:40 addons-028052 kubelet[1302]: I1210 05:48:40.303004    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-6kjqg" podStartSLOduration=1.93654561 podStartE2EDuration="2.302982106s" podCreationTimestamp="2025-12-10 05:48:38 +0000 UTC" firstStartedPulling="2025-12-10 05:48:39.031801057 +0000 UTC m=+247.844732651" lastFinishedPulling="2025-12-10 05:48:39.398237551 +0000 UTC m=+248.211169147" observedRunningTime="2025-12-10 05:48:40.302111022 +0000 UTC m=+249.115042637" watchObservedRunningTime="2025-12-10 05:48:40.302982106 +0000 UTC m=+249.115913718"
	
	
	==> storage-provisioner [58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98] <==
	W1210 05:48:15.522059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:17.525091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:17.530281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:19.533977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:19.537939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:21.541003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:21.547233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:23.550217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:23.554254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:25.557336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:25.561223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:27.564123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:27.569082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.572347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:29.576158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:31.579108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:31.584907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:33.587866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:33.592672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:35.595595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:35.600487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:37.603747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:37.607703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:39.612394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:48:39.618696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-028052 -n addons-028052
helpers_test.go:270: (dbg) Run:  kubectl --context addons-028052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-028052 describe pod ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-028052 describe pod ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4: exit status 1 (57.099532ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z5xgg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-297k4" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-028052 describe pod ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4: exit status 1
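The two NotFound errors above look like expected post-mortem noise rather than an additional failure: the admission-create and admission-patch pods belong to completed ingress-nginx Jobs, and they were most likely cleaned up between the non-running-pod listing and the describe call (an inference from this log, not something it states directly). A hedged way to cross-check that by hand, assuming the addon's default ingress-nginx namespace:

	kubectl --context addons-028052 -n ingress-nginx get jobs
	kubectl --context addons-028052 get pods -A --field-selector=status.phase!=Running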
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (242.288304ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:48:41.285685   28416 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:48:41.286025   28416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:48:41.286032   28416 out.go:374] Setting ErrFile to fd 2...
	I1210 05:48:41.286039   28416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:48:41.286696   28416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:48:41.287022   28416 mustload.go:66] Loading cluster: addons-028052
	I1210 05:48:41.287335   28416 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:48:41.287355   28416 addons.go:622] checking whether the cluster is paused
	I1210 05:48:41.287445   28416 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:48:41.287457   28416 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:48:41.287835   28416 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:48:41.306424   28416 ssh_runner.go:195] Run: systemctl --version
	I1210 05:48:41.306497   28416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:48:41.324225   28416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:48:41.419096   28416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:48:41.419199   28416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:48:41.448695   28416 cri.go:89] found id: "3a3efde58fa771a88945cc7c48610942c659c69b3aa4fb8309615494527caa17"
	I1210 05:48:41.448730   28416 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:48:41.448737   28416 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:48:41.448742   28416 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:48:41.448748   28416 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:48:41.448753   28416 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:48:41.448757   28416 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:48:41.448762   28416 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:48:41.448766   28416 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:48:41.448774   28416 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:48:41.448777   28416 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:48:41.448780   28416 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:48:41.448782   28416 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:48:41.448785   28416 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:48:41.448789   28416 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:48:41.448797   28416 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:48:41.448806   28416 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:48:41.448813   28416 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:48:41.448819   28416 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:48:41.448823   28416 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:48:41.448828   28416 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:48:41.448836   28416 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:48:41.448841   28416 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:48:41.448848   28416 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:48:41.448853   28416 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:48:41.448860   28416 cri.go:89] found id: ""
	I1210 05:48:41.448925   28416 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:48:41.463646   28416 out.go:203] 
	W1210 05:48:41.464840   28416 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:48:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:48:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:48:41.464857   28416 out.go:285] * 
	* 
	W1210 05:48:41.468079   28416 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:48:41.469614   28416 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
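Every MK_ADDON_DISABLE_PAUSED failure in this run follows the pattern visible in the stderr above: minikube lists the kube-system containers with crictl and then runs `sudo runc list -f json` on the node to check whether the cluster is paused, and that command exits 1 because /run/runc does not exist. A plausible cause (an assumption, not confirmed by this log) is that the CRI-O configuration on this image uses an OCI runtime whose state lives outside /run/runc, so the directory runc expects is never created. A minimal sketch for reproducing the check by hand against this node; the `ls /run` step is only an illustrative way to see which runtime state directories are actually present:

	docker exec addons-028052 sudo runc list -f json   # fails: "open /run/runc: no such file or directory"
	docker exec addons-028052 ls /run                  # assumption: look for the real runtime state dir (e.g. crio, crun)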
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable ingress --alsologtostderr -v=1: exit status 11 (246.208372ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:48:41.527808   28480 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:48:41.528331   28480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:48:41.528353   28480 out.go:374] Setting ErrFile to fd 2...
	I1210 05:48:41.528361   28480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:48:41.528842   28480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:48:41.529492   28480 mustload.go:66] Loading cluster: addons-028052
	I1210 05:48:41.529861   28480 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:48:41.529882   28480 addons.go:622] checking whether the cluster is paused
	I1210 05:48:41.529965   28480 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:48:41.529977   28480 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:48:41.530326   28480 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:48:41.549298   28480 ssh_runner.go:195] Run: systemctl --version
	I1210 05:48:41.549369   28480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:48:41.567895   28480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:48:41.662379   28480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:48:41.662461   28480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:48:41.693011   28480 cri.go:89] found id: "3a3efde58fa771a88945cc7c48610942c659c69b3aa4fb8309615494527caa17"
	I1210 05:48:41.693035   28480 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:48:41.693041   28480 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:48:41.693046   28480 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:48:41.693049   28480 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:48:41.693052   28480 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:48:41.693055   28480 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:48:41.693058   28480 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:48:41.693061   28480 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:48:41.693067   28480 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:48:41.693070   28480 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:48:41.693079   28480 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:48:41.693085   28480 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:48:41.693087   28480 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:48:41.693090   28480 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:48:41.693094   28480 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:48:41.693097   28480 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:48:41.693101   28480 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:48:41.693104   28480 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:48:41.693106   28480 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:48:41.693109   28480 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:48:41.693111   28480 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:48:41.693114   28480 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:48:41.693117   28480 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:48:41.693119   28480 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:48:41.693122   28480 cri.go:89] found id: ""
	I1210 05:48:41.693174   28480 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:48:41.707809   28480 out.go:203] 
	W1210 05:48:41.709139   28480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:48:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:48:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:48:41.709171   28480 out.go:285] * 
	* 
	W1210 05:48:41.714663   28480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:48:41.716424   28480 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.49s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-t97f6" [771890b7-6b3d-4979-b5fe-e57a1f6734ee] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003819264s
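The readiness gate above only waits for the gadget pod (labelled k8s-app=gadget) to report Running; a roughly equivalent manual check (a sketch using just the label, namespace, and timeout shown in the log) would be:

	kubectl --context addons-028052 -n gadget wait pod -l k8s-app=gadget --for=condition=Ready --timeout=8m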
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (248.373743ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:25.308525   25481 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:25.308829   25481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:25.308837   25481 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:25.308843   25481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:25.309190   25481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:25.309521   25481 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:25.309858   25481 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:25.309881   25481 addons.go:622] checking whether the cluster is paused
	I1210 05:46:25.309981   25481 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:25.309995   25481 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:25.310390   25481 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:25.331326   25481 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:25.331390   25481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:25.351579   25481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:25.447192   25481 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:25.447270   25481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:25.476521   25481 cri.go:89] found id: "3a3efde58fa771a88945cc7c48610942c659c69b3aa4fb8309615494527caa17"
	I1210 05:46:25.476546   25481 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:25.476550   25481 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:25.476554   25481 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:25.476557   25481 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:25.476561   25481 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:25.476564   25481 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:25.476566   25481 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:25.476569   25481 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:25.476585   25481 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:25.476590   25481 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:25.476594   25481 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:25.476599   25481 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:25.476603   25481 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:25.476611   25481 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:25.476622   25481 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:25.476630   25481 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:25.476636   25481 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:25.476639   25481 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:25.476641   25481 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:25.476644   25481 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:25.476647   25481 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:25.476650   25481 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:25.476653   25481 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:25.476656   25481 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:25.476659   25481 cri.go:89] found id: ""
	I1210 05:46:25.476715   25481 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:25.490174   25481 out.go:203] 
	W1210 05:46:25.491463   25481 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:25.491499   25481 out.go:285] * 
	* 
	W1210 05:46:25.494638   25481 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:25.495988   25481 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.25s)
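
Every addon enable/disable failure in this run shares the root cause visible in the stderr above: before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node. On this CRI-O node the default runc state directory /run/runc does not exist, so that second command exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED even though nothing is paused. A minimal sketch for reproducing the check by hand, using the profile name from this run and commands taken from the stderr above (the "minikube ssh" wrapper is an assumption about how one would reach the node):

	# Same crictl listing minikube runs first (it succeeds; see the "found id:" lines above).
	out/minikube-linux-amd64 -p addons-028052 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

	# The step that fails: runc looks for its container state under /run/runc by default.
	out/minikube-linux-amd64 -p addons-028052 ssh "sudo runc list -f json"

	# Confirm the state directory is simply absent on the node.
	out/minikube-linux-amd64 -p addons-028052 ssh "ls -ld /run/runc"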

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.596004ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I1210 05:46:14.089691   12374 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 05:46:14.089712   12374 kapi.go:107] duration metric: took 3.243732ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:353: "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003285094s
addons_test.go:465: (dbg) Run:  kubectl --context addons-028052 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (261.378713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:19.208769   25000 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:19.209057   25000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:19.209068   25000 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:19.209072   25000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:19.209264   25000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:19.209543   25000 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:19.209914   25000 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:19.209939   25000 addons.go:622] checking whether the cluster is paused
	I1210 05:46:19.210057   25000 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:19.210106   25000 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:19.210600   25000 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:19.233780   25000 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:19.233844   25000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:19.257317   25000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:19.355454   25000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:19.355554   25000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:19.386390   25000 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:19.386408   25000 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:19.386412   25000 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:19.386416   25000 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:19.386419   25000 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:19.386429   25000 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:19.386433   25000 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:19.386435   25000 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:19.386438   25000 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:19.386448   25000 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:19.386451   25000 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:19.386453   25000 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:19.386456   25000 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:19.386458   25000 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:19.386461   25000 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:19.386482   25000 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:19.386487   25000 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:19.386493   25000 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:19.386498   25000 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:19.386502   25000 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:19.386506   25000 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:19.386509   25000 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:19.386511   25000 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:19.386514   25000 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:19.386528   25000 cri.go:89] found id: ""
	I1210 05:46:19.386570   25000 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:19.401173   25000 out.go:203] 
	W1210 05:46:19.402287   25000 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:19.402306   25000 out.go:285] * 
	* 
	W1210 05:46:19.405293   25000 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:19.406499   25000 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)
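
The paused-check failure above is identical to the InspektorGadget case. If the suspicion is that CRI-O on this kicbase image uses a runtime (or runtime state root) other than the /run/runc default that "runc list" expects, that can be inspected directly on the node. A diagnostic sketch only: the /etc/crio paths, key names, and candidate state directories below are assumptions that vary by CRI-O version and image.

	# Runtime name and version as reported by the CRI itself.
	out/minikube-linux-amd64 -p addons-028052 ssh "sudo crictl info"

	# Which OCI runtime CRI-O is configured to use, and which state directories actually exist.
	out/minikube-linux-amd64 -p addons-028052 ssh "sudo grep -Rs default_runtime /etc/crio/"
	out/minikube-linux-amd64 -p addons-028052 ssh "ls -d /run/runc /run/crun /run/crio"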

                                                
                                    
x
+
TestAddons/parallel/CSI (44.97s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1210 05:46:14.086500   12374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.25425ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-028052 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-028052 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [835d07a2-5407-4bce-b2e3-a611f80dfa86] Pending
helpers_test.go:353: "task-pv-pod" [835d07a2-5407-4bce-b2e3-a611f80dfa86] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0025944s
addons_test.go:574: (dbg) Run:  kubectl --context addons-028052 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-028052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-028052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-028052 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-028052 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-028052 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-028052 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069] Pending
helpers_test.go:353: "task-pv-pod-restore" [d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [d6fd8501-2cfd-4fc0-bcb9-1abb21e7f069] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003776715s
addons_test.go:616: (dbg) Run:  kubectl --context addons-028052 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-028052 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-028052 delete volumesnapshot new-snapshot-demo
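
Up to this point the CSI flow itself (claim, pod, snapshot, restored claim, restored pod) succeeds; only the addon-disable calls below fail, for the same paused-check reason as the other tests. The testdata manifests are not reproduced in this report, so the following is only a rough stand-in for testdata/csi-hostpath-driver/pvc.yaml to replay the first step by hand. The claim name hpvc comes from the log; the storage class name and size are assumptions.

	# Create a claim like the one the test waits on, then poll it the same way the test does.
	# <<- strips the leading tabs used for indentation in this report.
	kubectl --context addons-028052 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: csi-hostpath-sc   # assumed; check "kubectl get storageclass" for the addon's actual name
	  resources:
	    requests:
	      storage: 1Gi
	EOF
	kubectl --context addons-028052 get pvc hpvc -o jsonpath={.status.phase} -n default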
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (247.254908ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:58.608464   26294 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:58.608779   26294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:58.608790   26294 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:58.608794   26294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:58.608989   26294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:58.609251   26294 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:58.609630   26294 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:58.609655   26294 addons.go:622] checking whether the cluster is paused
	I1210 05:46:58.609740   26294 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:58.609752   26294 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:58.610186   26294 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:58.629482   26294 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:58.629537   26294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:58.650083   26294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:58.747098   26294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:58.747249   26294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:58.777115   26294 cri.go:89] found id: "3a3efde58fa771a88945cc7c48610942c659c69b3aa4fb8309615494527caa17"
	I1210 05:46:58.777153   26294 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:58.777157   26294 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:58.777160   26294 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:58.777164   26294 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:58.777169   26294 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:58.777172   26294 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:58.777175   26294 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:58.777178   26294 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:58.777195   26294 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:58.777199   26294 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:58.777203   26294 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:58.777207   26294 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:58.777211   26294 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:58.777215   26294 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:58.777235   26294 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:58.777243   26294 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:58.777247   26294 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:58.777250   26294 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:58.777253   26294 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:58.777256   26294 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:58.777258   26294 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:58.777261   26294 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:58.777264   26294 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:58.777266   26294 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:58.777269   26294 cri.go:89] found id: ""
	I1210 05:46:58.777317   26294 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:58.791889   26294 out.go:203] 
	W1210 05:46:58.793395   26294 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:58.793431   26294 out.go:285] * 
	* 
	W1210 05:46:58.796810   26294 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:58.798307   26294 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (247.682016ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:58.858521   26372 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:58.858703   26372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:58.858716   26372 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:58.858723   26372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:58.859012   26372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:58.859357   26372 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:58.859898   26372 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:58.859928   26372 addons.go:622] checking whether the cluster is paused
	I1210 05:46:58.860064   26372 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:58.860081   26372 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:58.860618   26372 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:58.878623   26372 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:58.878678   26372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:58.897992   26372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:58.994091   26372 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:58.994148   26372 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:59.024731   26372 cri.go:89] found id: "3a3efde58fa771a88945cc7c48610942c659c69b3aa4fb8309615494527caa17"
	I1210 05:46:59.024754   26372 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:59.024758   26372 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:59.024761   26372 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:59.024766   26372 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:59.024771   26372 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:59.024775   26372 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:59.024780   26372 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:59.024786   26372 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:59.024793   26372 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:59.024799   26372 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:59.024803   26372 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:59.024813   26372 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:59.024818   26372 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:59.024825   26372 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:59.024841   26372 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:59.024850   26372 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:59.024856   26372 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:59.024861   26372 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:59.024865   26372 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:59.024874   26372 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:59.024883   26372 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:59.024887   26372 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:59.024895   26372 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:59.024899   26372 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:59.024904   26372 cri.go:89] found id: ""
	I1210 05:46:59.024947   26372 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:59.039662   26372 out.go:203] 
	W1210 05:46:59.041014   26372 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:59.041042   26372 out.go:285] * 
	* 
	W1210 05:46:59.044188   26372 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:59.045716   26372 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.97s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-028052 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-028052 --alsologtostderr -v=1: exit status 11 (244.036205ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:03.593452   22347 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:03.593767   22347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:03.593778   22347 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:03.593782   22347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:03.593990   22347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:03.594255   22347 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:03.594646   22347 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:03.594667   22347 addons.go:622] checking whether the cluster is paused
	I1210 05:46:03.594754   22347 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:03.594767   22347 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:03.595145   22347 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:03.613260   22347 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:03.613303   22347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:03.632450   22347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:03.727976   22347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:03.728070   22347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:03.756235   22347 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:03.756257   22347 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:03.756263   22347 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:03.756267   22347 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:03.756272   22347 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:03.756276   22347 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:03.756281   22347 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:03.756284   22347 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:03.756289   22347 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:03.756308   22347 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:03.756316   22347 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:03.756321   22347 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:03.756329   22347 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:03.756334   22347 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:03.756342   22347 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:03.756352   22347 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:03.756358   22347 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:03.756362   22347 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:03.756365   22347 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:03.756368   22347 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:03.756371   22347 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:03.756373   22347 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:03.756376   22347 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:03.756379   22347 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:03.756381   22347 cri.go:89] found id: ""
	I1210 05:46:03.756429   22347 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:03.770105   22347 out.go:203] 
	W1210 05:46:03.771220   22347 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:03.771236   22347 out.go:285] * 
	* 
	W1210 05:46:03.774184   22347 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:03.775383   22347 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-028052 --alsologtostderr -v=1": exit status 11
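
As with the disable calls, the enable fails only because the paused-check cannot run, not because the cluster is paused; the docker inspect output below reports the node container as running. Two quick cross-checks, a sketch built from commands already present in this report:

	# minikube's own view of the profile.
	out/minikube-linux-amd64 -p addons-028052 status

	# The raw docker state minikube inspects before the runc check (see the stderr above).
	docker container inspect addons-028052 --format={{.State.Status}}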
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-028052
helpers_test.go:244: (dbg) docker inspect addons-028052:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9",
	        "Created": "2025-12-10T05:44:14.519997499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14780,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:44:14.558891632Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/hosts",
	        "LogPath": "/var/lib/docker/containers/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9/51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9-json.log",
	        "Name": "/addons-028052",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-028052:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-028052",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b0c35579b592289c345d8ecf2bb629cbb5fc06f4baff5c9b882e5b7ea9bbd9",
	                "LowerDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb4e53c07f08ba91546f608e922d047f47e2a74e9c07537bd03be60ccaba69fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-028052",
	                "Source": "/var/lib/docker/volumes/addons-028052/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-028052",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-028052",
	                "name.minikube.sigs.k8s.io": "addons-028052",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a01efb72ebd326b77a7a2234e30dcd1c0f417d585e4d21113af5e3c2887e6c71",
	            "SandboxKey": "/var/run/docker/netns/a01efb72ebd3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-028052": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "089346d8609a769b6c49d5b936da4aac05f656c055bbbab0774e86d789ca5e72",
	                    "EndpointID": "84daf152514265731a1962fd2a5fd4d62b9e4c80bf9da5222828cdc0d99b979b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ca:f3:9b:36:83:0e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-028052",
	                        "51b0c35579b5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
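As an aside for anyone working through this post-mortem by hand: the host port mappings shown in the inspect output above can be queried with the same Go template that minikube itself uses later in this log (the `docker container inspect -f` calls in the Last Start section). A minimal sketch, assuming the addons-028052 container is still present and the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Template copied from the inspect -f calls in the Last Start log below:
	// it resolves the host port published for the guest SSH port (22/tcp).
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-028052").Output()
	if err != nil {
		panic(err)
	}
	// For the container inspected above this prints 32768.
	fmt.Println("22/tcp published on host port", strings.TrimSpace(string(out)))
}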
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-028052 -n addons-028052
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-028052 logs -n 25: (1.133194713s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684743 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-684743   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-684743                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-684743   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ -o=json --download-only -p download-only-560719 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-560719   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-560719                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-560719   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ -o=json --download-only -p download-only-656073 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-656073   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-656073                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-656073   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-684743                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-684743   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-560719                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-560719   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-656073                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-656073   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ --download-only -p download-docker-773795 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-773795 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ -p download-docker-773795                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-773795 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ --download-only -p binary-mirror-740663 --alsologtostderr --binary-mirror http://127.0.0.1:43475 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-740663   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ -p binary-mirror-740663                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-740663   │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ addons  │ disable dashboard -p addons-028052                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-028052          │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ addons  │ enable dashboard -p addons-028052                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-028052          │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ start   │ -p addons-028052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-028052          │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:45 UTC │
	│ addons  │ addons-028052 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-028052          │ jenkins │ v1.37.0 │ 10 Dec 25 05:45 UTC │                     │
	│ addons  │ addons-028052 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-028052          │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	│ addons  │ enable headlamp -p addons-028052 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-028052          │ jenkins │ v1.37.0 │ 10 Dec 25 05:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
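For reference when replaying this failure outside the test harness: the last audit row above is the headlamp enable step under test. A minimal sketch of re-running that row with the same binary path and profile name used throughout this report (a reproduction aid, not part of the captured log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Re-run the final audit entry: enable the headlamp addon on the
	// addons-028052 profile with the same verbosity flags the test used.
	cmd := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "headlamp", "-p", "addons-028052",
		"--alsologtostderr", "-v=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// A non-zero exit here mirrors the failure captured in this report.
		os.Exit(1)
	}
}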
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:43:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:43:50.706588   14122 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:50.706831   14122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:50.706841   14122 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:50.706845   14122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:50.707023   14122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:43:50.707563   14122 out.go:368] Setting JSON to false
	I1210 05:43:50.708410   14122 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1582,"bootTime":1765343849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:43:50.708481   14122 start.go:143] virtualization: kvm guest
	I1210 05:43:50.710460   14122 out.go:179] * [addons-028052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:43:50.711809   14122 notify.go:221] Checking for updates...
	I1210 05:43:50.711865   14122 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:43:50.713275   14122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:50.714587   14122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:43:50.715684   14122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:43:50.716966   14122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:43:50.718390   14122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:43:50.720044   14122 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:50.744230   14122 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:43:50.744342   14122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:50.798942   14122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:43:50.789304194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:50.799041   14122 docker.go:319] overlay module found
	I1210 05:43:50.801024   14122 out.go:179] * Using the docker driver based on user configuration
	I1210 05:43:50.802615   14122 start.go:309] selected driver: docker
	I1210 05:43:50.802633   14122 start.go:927] validating driver "docker" against <nil>
	I1210 05:43:50.802644   14122 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:43:50.803220   14122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:50.857926   14122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:43:50.848909586 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:50.858056   14122 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:43:50.858250   14122 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:43:50.860189   14122 out.go:179] * Using Docker driver with root privileges
	I1210 05:43:50.861302   14122 cni.go:84] Creating CNI manager for ""
	I1210 05:43:50.861379   14122 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:43:50.861395   14122 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 05:43:50.861488   14122 start.go:353] cluster config:
	{Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1210 05:43:50.862868   14122 out.go:179] * Starting "addons-028052" primary control-plane node in "addons-028052" cluster
	I1210 05:43:50.863971   14122 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 05:43:50.865319   14122 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 05:43:50.866684   14122 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:43:50.866717   14122 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 05:43:50.866723   14122 cache.go:65] Caching tarball of preloaded images
	I1210 05:43:50.866793   14122 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 05:43:50.866805   14122 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 05:43:50.866792   14122 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 05:43:50.867112   14122 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/config.json ...
	I1210 05:43:50.867148   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/config.json: {Name:mke5bc82231a890fb8e87878b1217790859e5087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:43:50.883930   14122 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 05:43:50.884050   14122 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 05:43:50.884072   14122 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory, skipping pull
	I1210 05:43:50.884076   14122 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in cache, skipping pull
	I1210 05:43:50.884083   14122 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca as a tarball
	I1210 05:43:50.884090   14122 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from local cache
	I1210 05:44:04.204967   14122 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from cached tarball
	I1210 05:44:04.205004   14122 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:44:04.205052   14122 start.go:360] acquireMachinesLock for addons-028052: {Name:mkb82df074e71d49290a9286f326d6fa899e9ce1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:44:04.205187   14122 start.go:364] duration metric: took 96.029µs to acquireMachinesLock for "addons-028052"
	I1210 05:44:04.205218   14122 start.go:93] Provisioning new machine with config: &{Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:44:04.205297   14122 start.go:125] createHost starting for "" (driver="docker")
	I1210 05:44:04.207452   14122 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 05:44:04.207712   14122 start.go:159] libmachine.API.Create for "addons-028052" (driver="docker")
	I1210 05:44:04.207747   14122 client.go:173] LocalClient.Create starting
	I1210 05:44:04.207846   14122 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 05:44:04.301087   14122 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 05:44:04.419732   14122 cli_runner.go:164] Run: docker network inspect addons-028052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 05:44:04.439773   14122 cli_runner.go:211] docker network inspect addons-028052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 05:44:04.439844   14122 network_create.go:284] running [docker network inspect addons-028052] to gather additional debugging logs...
	I1210 05:44:04.439863   14122 cli_runner.go:164] Run: docker network inspect addons-028052
	W1210 05:44:04.456581   14122 cli_runner.go:211] docker network inspect addons-028052 returned with exit code 1
	I1210 05:44:04.456608   14122 network_create.go:287] error running [docker network inspect addons-028052]: docker network inspect addons-028052: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-028052 not found
	I1210 05:44:04.456620   14122 network_create.go:289] output of [docker network inspect addons-028052]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-028052 not found
	
	** /stderr **
	I1210 05:44:04.456780   14122 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:44:04.476061   14122 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001db53b0}
	I1210 05:44:04.476146   14122 network_create.go:124] attempt to create docker network addons-028052 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 05:44:04.476200   14122 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-028052 addons-028052
	I1210 05:44:04.524230   14122 network_create.go:108] docker network addons-028052 192.168.49.0/24 created
	I1210 05:44:04.524264   14122 kic.go:121] calculated static IP "192.168.49.2" for the "addons-028052" container
	I1210 05:44:04.524380   14122 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 05:44:04.540855   14122 cli_runner.go:164] Run: docker volume create addons-028052 --label name.minikube.sigs.k8s.io=addons-028052 --label created_by.minikube.sigs.k8s.io=true
	I1210 05:44:04.559161   14122 oci.go:103] Successfully created a docker volume addons-028052
	I1210 05:44:04.559284   14122 cli_runner.go:164] Run: docker run --rm --name addons-028052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028052 --entrypoint /usr/bin/test -v addons-028052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 05:44:10.554517   14122 cli_runner.go:217] Completed: docker run --rm --name addons-028052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028052 --entrypoint /usr/bin/test -v addons-028052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib: (5.995142286s)
	I1210 05:44:10.554553   14122 oci.go:107] Successfully prepared a docker volume addons-028052
	I1210 05:44:10.554613   14122 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:10.554628   14122 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 05:44:10.554699   14122 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-028052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 05:44:14.449019   14122 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-028052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.894271889s)
	I1210 05:44:14.449074   14122 kic.go:203] duration metric: took 3.894421924s to extract preloaded images to volume ...
	W1210 05:44:14.449201   14122 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 05:44:14.449239   14122 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 05:44:14.449297   14122 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 05:44:14.503943   14122 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-028052 --name addons-028052 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-028052 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-028052 --network addons-028052 --ip 192.168.49.2 --volume addons-028052:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 05:44:14.812049   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Running}}
	I1210 05:44:14.831237   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:14.850767   14122 cli_runner.go:164] Run: docker exec addons-028052 stat /var/lib/dpkg/alternatives/iptables
	I1210 05:44:14.902942   14122 oci.go:144] the created container "addons-028052" has a running status.
	I1210 05:44:14.902977   14122 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa...
	I1210 05:44:15.010784   14122 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 05:44:15.036596   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:15.057120   14122 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 05:44:15.057146   14122 kic_runner.go:114] Args: [docker exec --privileged addons-028052 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 05:44:15.101605   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:15.126879   14122 machine.go:94] provisionDockerMachine start ...
	I1210 05:44:15.126987   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:15.153954   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:15.154227   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:15.154239   14122 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:44:15.155495   14122 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52150->127.0.0.1:32768: read: connection reset by peer
	I1210 05:44:18.287700   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-028052
	
	I1210 05:44:18.287728   14122 ubuntu.go:182] provisioning hostname "addons-028052"
	I1210 05:44:18.287800   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.306546   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:18.306751   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:18.306763   14122 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-028052 && echo "addons-028052" | sudo tee /etc/hostname
	I1210 05:44:18.446275   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-028052
	
	I1210 05:44:18.446355   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.464350   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:18.464591   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:18.464611   14122 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-028052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-028052/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-028052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:44:18.595160   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:44:18.595185   14122 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 05:44:18.595215   14122 ubuntu.go:190] setting up certificates
	I1210 05:44:18.595226   14122 provision.go:84] configureAuth start
	I1210 05:44:18.595270   14122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028052
	I1210 05:44:18.613098   14122 provision.go:143] copyHostCerts
	I1210 05:44:18.613179   14122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 05:44:18.613284   14122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 05:44:18.613342   14122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 05:44:18.613394   14122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.addons-028052 san=[127.0.0.1 192.168.49.2 addons-028052 localhost minikube]
	I1210 05:44:18.689603   14122 provision.go:177] copyRemoteCerts
	I1210 05:44:18.689656   14122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:44:18.689688   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.707578   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:18.802904   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:44:18.822660   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 05:44:18.840066   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:44:18.857327   14122 provision.go:87] duration metric: took 262.081326ms to configureAuth
	I1210 05:44:18.857376   14122 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:44:18.857569   14122 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:18.857663   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:18.875525   14122 main.go:143] libmachine: Using SSH client type: native
	I1210 05:44:18.875786   14122 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 05:44:18.875811   14122 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 05:44:19.141280   14122 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 05:44:19.141303   14122 machine.go:97] duration metric: took 4.014400193s to provisionDockerMachine
	I1210 05:44:19.141313   14122 client.go:176] duration metric: took 14.933561238s to LocalClient.Create
	I1210 05:44:19.141340   14122 start.go:167] duration metric: took 14.933628326s to libmachine.API.Create "addons-028052"
	I1210 05:44:19.141349   14122 start.go:293] postStartSetup for "addons-028052" (driver="docker")
	I1210 05:44:19.141366   14122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:44:19.141420   14122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:44:19.141464   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.158974   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.254990   14122 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:44:19.258741   14122 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:44:19.258770   14122 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:44:19.258783   14122 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 05:44:19.258864   14122 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 05:44:19.258899   14122 start.go:296] duration metric: took 117.541727ms for postStartSetup
	I1210 05:44:19.259213   14122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028052
	I1210 05:44:19.278082   14122 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/config.json ...
	I1210 05:44:19.278363   14122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:44:19.278404   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.298857   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.390711   14122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:44:19.395180   14122 start.go:128] duration metric: took 15.189870457s to createHost
	I1210 05:44:19.395206   14122 start.go:83] releasing machines lock for "addons-028052", held for 15.190001233s
	I1210 05:44:19.395265   14122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-028052
	I1210 05:44:19.413527   14122 ssh_runner.go:195] Run: cat /version.json
	I1210 05:44:19.413577   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.413618   14122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 05:44:19.413701   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:19.433824   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.434405   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:19.578583   14122 ssh_runner.go:195] Run: systemctl --version
	I1210 05:44:19.585100   14122 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 05:44:19.617892   14122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:44:19.622301   14122 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:44:19.622371   14122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:44:19.648715   14122 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:44:19.648738   14122 start.go:496] detecting cgroup driver to use...
	I1210 05:44:19.648765   14122 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 05:44:19.648805   14122 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:44:19.664796   14122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:44:19.677032   14122 docker.go:218] disabling cri-docker service (if available) ...
	I1210 05:44:19.677100   14122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 05:44:19.693394   14122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 05:44:19.710723   14122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 05:44:19.792617   14122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 05:44:19.879870   14122 docker.go:234] disabling docker service ...
	I1210 05:44:19.879924   14122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 05:44:19.898125   14122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 05:44:19.910823   14122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 05:44:19.997295   14122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 05:44:20.074688   14122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:44:20.086838   14122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:44:20.100849   14122 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 05:44:20.100903   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.111336   14122 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 05:44:20.111397   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.120179   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.128758   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.137459   14122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:44:20.145448   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.153890   14122 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.167192   14122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 05:44:20.176200   14122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:44:20.183953   14122 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 05:44:20.184017   14122 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 05:44:20.195718   14122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:44:20.203278   14122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:20.282107   14122 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 05:44:20.415220   14122 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 05:44:20.415285   14122 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 05:44:20.419229   14122 start.go:564] Will wait 60s for crictl version
	I1210 05:44:20.419280   14122 ssh_runner.go:195] Run: which crictl
	I1210 05:44:20.422851   14122 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:44:20.447061   14122 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 05:44:20.447186   14122 ssh_runner.go:195] Run: crio --version
	I1210 05:44:20.474522   14122 ssh_runner.go:195] Run: crio --version
	I1210 05:44:20.504341   14122 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 05:44:20.505521   14122 cli_runner.go:164] Run: docker network inspect addons-028052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:44:20.522134   14122 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 05:44:20.526149   14122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:44:20.536187   14122 kubeadm.go:884] updating cluster {Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:44:20.536354   14122 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 05:44:20.536415   14122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:44:20.567747   14122 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 05:44:20.567774   14122 crio.go:433] Images already preloaded, skipping extraction
	I1210 05:44:20.567815   14122 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 05:44:20.591705   14122 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 05:44:20.591727   14122 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:44:20.591734   14122 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1210 05:44:20.591815   14122 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-028052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:44:20.591874   14122 ssh_runner.go:195] Run: crio config
	I1210 05:44:20.635335   14122 cni.go:84] Creating CNI manager for ""
	I1210 05:44:20.635374   14122 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:44:20.635394   14122 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:44:20.635423   14122 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-028052 NodeName:addons-028052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:44:20.635573   14122 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-028052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:44:20.635647   14122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 05:44:20.643500   14122 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:44:20.643564   14122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:44:20.651537   14122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 05:44:20.664135   14122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 05:44:20.679376   14122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1210 05:44:20.692200   14122 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:44:20.695857   14122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:44:20.705748   14122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:20.780990   14122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:44:20.802026   14122 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052 for IP: 192.168.49.2
	I1210 05:44:20.802056   14122 certs.go:195] generating shared ca certs ...
	I1210 05:44:20.802075   14122 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:20.802196   14122 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 05:44:20.996787   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt ...
	I1210 05:44:20.996813   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt: {Name:mk1d513f296e0364032ebd95d26dea0f51debf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:20.997012   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key ...
	I1210 05:44:20.997029   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key: {Name:mkdc1abbf79f324d72d891c5908933fa5d660c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:20.997137   14122 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 05:44:21.114674   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt ...
	I1210 05:44:21.114703   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt: {Name:mkcb5cd5e73a33b179e01ea7cc46ae79b5b0a262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.114880   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key ...
	I1210 05:44:21.114893   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key: {Name:mk1fcd7b0fcf2b218fdac4ffa80e78d4d2cd94f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.114994   14122 certs.go:257] generating profile certs ...
	I1210 05:44:21.115062   14122 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.key
	I1210 05:44:21.115077   14122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt with IP's: []
	I1210 05:44:21.176961   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt ...
	I1210 05:44:21.176991   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: {Name:mk7ac122e2baddd3f3b72bcf1a161b95df7673ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.177180   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.key ...
	I1210 05:44:21.177193   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.key: {Name:mkb65a53a364e0186156725a039b5fd6404ac52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.177293   14122 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919
	I1210 05:44:21.177315   14122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 05:44:21.222410   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919 ...
	I1210 05:44:21.222436   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919: {Name:mk7a476904d9d0a865c736e7fa3b577ceb879c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.222628   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919 ...
	I1210 05:44:21.222645   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919: {Name:mk20a2ae7675a7d4b1a9b68da172b57b8b6ee2c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.222747   14122 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt.30876919 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt
	I1210 05:44:21.222845   14122 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key.30876919 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key
	I1210 05:44:21.222901   14122 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key
	I1210 05:44:21.222918   14122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt with IP's: []
	I1210 05:44:21.273763   14122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt ...
	I1210 05:44:21.273791   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt: {Name:mk53bf2b7a1ecbef55230cbac25da73eee95b050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.273971   14122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key ...
	I1210 05:44:21.273987   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key: {Name:mkc842dbb4298c960e5236d1e4c6081c60234adc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:21.274180   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 05:44:21.274215   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 05:44:21.274240   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 05:44:21.274262   14122 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 05:44:21.274867   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:44:21.292972   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:44:21.311017   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:44:21.329010   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:44:21.346674   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 05:44:21.364311   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 05:44:21.381735   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:44:21.399020   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 05:44:21.417304   14122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:44:21.436288   14122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:44:21.449088   14122 ssh_runner.go:195] Run: openssl version
	I1210 05:44:21.455224   14122 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.462703   14122 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:44:21.472792   14122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.476794   14122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.476846   14122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:44:21.510493   14122 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:44:21.518858   14122 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 05:44:21.527079   14122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:44:21.531225   14122 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:44:21.531283   14122 kubeadm.go:401] StartCluster: {Name:addons-028052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-028052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:44:21.531365   14122 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:44:21.531439   14122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:44:21.559720   14122 cri.go:89] found id: ""
	I1210 05:44:21.559786   14122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:44:21.567906   14122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:44:21.575565   14122 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:44:21.575626   14122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:44:21.583217   14122 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:44:21.583235   14122 kubeadm.go:158] found existing configuration files:
	
	I1210 05:44:21.583282   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 05:44:21.590827   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:44:21.590887   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:44:21.598178   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 05:44:21.605715   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:44:21.605775   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:44:21.613072   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 05:44:21.620285   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:44:21.620359   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:44:21.627898   14122 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 05:44:21.635585   14122 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:44:21.635641   14122 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:44:21.642808   14122 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:44:21.680201   14122 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 05:44:21.680290   14122 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:44:21.700525   14122 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:44:21.700588   14122 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 05:44:21.700618   14122 kubeadm.go:319] OS: Linux
	I1210 05:44:21.700669   14122 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:44:21.700733   14122 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:44:21.700830   14122 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:44:21.700922   14122 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:44:21.700994   14122 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:44:21.701076   14122 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:44:21.701165   14122 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:44:21.701256   14122 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 05:44:21.755638   14122 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:44:21.755818   14122 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:44:21.755959   14122 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:44:21.762516   14122 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:44:21.765389   14122 out.go:252]   - Generating certificates and keys ...
	I1210 05:44:21.765508   14122 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:44:21.765621   14122 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:44:22.405231   14122 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:44:22.530593   14122 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:44:22.809538   14122 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:44:23.110980   14122 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:44:23.197722   14122 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:44:23.197881   14122 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-028052 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:44:23.624843   14122 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:44:23.624956   14122 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-028052 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:44:23.957276   14122 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:44:24.334748   14122 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:44:24.424527   14122 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:44:24.424607   14122 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:44:24.712431   14122 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:44:25.151928   14122 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:44:25.500330   14122 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:44:25.523126   14122 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:44:25.991809   14122 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:44:25.992188   14122 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:44:25.995755   14122 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:44:25.998415   14122 out.go:252]   - Booting up control plane ...
	I1210 05:44:25.998564   14122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:44:25.998655   14122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:44:25.998799   14122 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:44:26.022820   14122 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:44:26.022966   14122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:44:26.029616   14122 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:44:26.029722   14122 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:44:26.029763   14122 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:44:26.130994   14122 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:44:26.131136   14122 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:44:26.632880   14122 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.940174ms
	I1210 05:44:26.636635   14122 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 05:44:26.636751   14122 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 05:44:26.636846   14122 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 05:44:26.636927   14122 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 05:44:28.186939   14122 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.550285711s
	I1210 05:44:28.989612   14122 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.352995916s
	I1210 05:44:30.637986   14122 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001337765s
	I1210 05:44:30.653251   14122 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 05:44:30.666039   14122 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 05:44:30.677682   14122 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 05:44:30.677945   14122 kubeadm.go:319] [mark-control-plane] Marking the node addons-028052 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 05:44:30.686801   14122 kubeadm.go:319] [bootstrap-token] Using token: 0fuqj9.zxta1qtzv9xa5hm8
	I1210 05:44:30.688710   14122 out.go:252]   - Configuring RBAC rules ...
	I1210 05:44:30.688863   14122 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 05:44:30.692429   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 05:44:30.698509   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 05:44:30.702387   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 05:44:30.705073   14122 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 05:44:30.708364   14122 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 05:44:31.045157   14122 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 05:44:31.459023   14122 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 05:44:32.043534   14122 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 05:44:32.044373   14122 kubeadm.go:319] 
	I1210 05:44:32.044506   14122 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 05:44:32.044518   14122 kubeadm.go:319] 
	I1210 05:44:32.044652   14122 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 05:44:32.044673   14122 kubeadm.go:319] 
	I1210 05:44:32.044715   14122 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 05:44:32.044799   14122 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 05:44:32.044899   14122 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 05:44:32.044915   14122 kubeadm.go:319] 
	I1210 05:44:32.045006   14122 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 05:44:32.045018   14122 kubeadm.go:319] 
	I1210 05:44:32.045093   14122 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 05:44:32.045103   14122 kubeadm.go:319] 
	I1210 05:44:32.045173   14122 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 05:44:32.045290   14122 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 05:44:32.045400   14122 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 05:44:32.045414   14122 kubeadm.go:319] 
	I1210 05:44:32.045548   14122 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 05:44:32.045650   14122 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 05:44:32.045658   14122 kubeadm.go:319] 
	I1210 05:44:32.045787   14122 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0fuqj9.zxta1qtzv9xa5hm8 \
	I1210 05:44:32.045927   14122 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 05:44:32.045959   14122 kubeadm.go:319] 	--control-plane 
	I1210 05:44:32.045967   14122 kubeadm.go:319] 
	I1210 05:44:32.046090   14122 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 05:44:32.046118   14122 kubeadm.go:319] 
	I1210 05:44:32.046262   14122 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0fuqj9.zxta1qtzv9xa5hm8 \
	I1210 05:44:32.046412   14122 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 05:44:32.048269   14122 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 05:44:32.048401   14122 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:44:32.048431   14122 cni.go:84] Creating CNI manager for ""
	I1210 05:44:32.048440   14122 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:44:32.050453   14122 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 05:44:32.051993   14122 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 05:44:32.056198   14122 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 05:44:32.056221   14122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 05:44:32.069246   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 05:44:32.269304   14122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 05:44:32.269373   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:32.269392   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-028052 minikube.k8s.io/updated_at=2025_12_10T05_44_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=addons-028052 minikube.k8s.io/primary=true
	I1210 05:44:32.350271   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:32.359606   14122 ops.go:34] apiserver oom_adj: -16
	I1210 05:44:32.850320   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:33.350316   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:33.850615   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:34.350340   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:34.851300   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:35.351399   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:35.851528   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:36.350527   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:36.851219   14122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 05:44:36.915407   14122 kubeadm.go:1114] duration metric: took 4.646092846s to wait for elevateKubeSystemPrivileges
	I1210 05:44:36.915440   14122 kubeadm.go:403] duration metric: took 15.384164126s to StartCluster
	I1210 05:44:36.915459   14122 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:36.915585   14122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:44:36.915943   14122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:44:36.916151   14122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 05:44:36.916185   14122 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 05:44:36.916232   14122 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 05:44:36.916351   14122 addons.go:70] Setting yakd=true in profile "addons-028052"
	I1210 05:44:36.916362   14122 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-028052"
	I1210 05:44:36.916376   14122 addons.go:239] Setting addon yakd=true in "addons-028052"
	I1210 05:44:36.916383   14122 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-028052"
	I1210 05:44:36.916395   14122 addons.go:70] Setting registry=true in profile "addons-028052"
	I1210 05:44:36.916421   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916426   14122 addons.go:70] Setting registry-creds=true in profile "addons-028052"
	I1210 05:44:36.916428   14122 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:36.916433   14122 addons.go:70] Setting default-storageclass=true in profile "addons-028052"
	I1210 05:44:36.916460   14122 addons.go:239] Setting addon registry-creds=true in "addons-028052"
	I1210 05:44:36.916486   14122 addons.go:70] Setting ingress-dns=true in profile "addons-028052"
	I1210 05:44:36.916493   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916442   14122 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-028052"
	I1210 05:44:36.916496   14122 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-028052"
	I1210 05:44:36.916507   14122 addons.go:70] Setting cloud-spanner=true in profile "addons-028052"
	I1210 05:44:36.916518   14122 addons.go:70] Setting inspektor-gadget=true in profile "addons-028052"
	I1210 05:44:36.916524   14122 addons.go:239] Setting addon cloud-spanner=true in "addons-028052"
	I1210 05:44:36.916536   14122 addons.go:239] Setting addon inspektor-gadget=true in "addons-028052"
	I1210 05:44:36.916550   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916590   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916599   14122 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-028052"
	I1210 05:44:36.916612   14122 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-028052"
	I1210 05:44:36.916730   14122 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-028052"
	I1210 05:44:36.916777   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916840   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916899   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916975   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916986   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917017   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917072   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917307   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.917356   14122 addons.go:70] Setting metrics-server=true in profile "addons-028052"
	I1210 05:44:36.917379   14122 addons.go:239] Setting addon metrics-server=true in "addons-028052"
	I1210 05:44:36.917415   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916353   14122 addons.go:70] Setting gcp-auth=true in profile "addons-028052"
	I1210 05:44:36.917606   14122 mustload.go:66] Loading cluster: addons-028052
	I1210 05:44:36.917867   14122 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:44:36.918157   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.918240   14122 addons.go:70] Setting storage-provisioner=true in profile "addons-028052"
	I1210 05:44:36.918263   14122 addons.go:239] Setting addon storage-provisioner=true in "addons-028052"
	I1210 05:44:36.918287   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916454   14122 addons.go:239] Setting addon registry=true in "addons-028052"
	I1210 05:44:36.918674   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916420   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.918179   14122 addons.go:70] Setting volcano=true in profile "addons-028052"
	I1210 05:44:36.918940   14122 addons.go:239] Setting addon volcano=true in "addons-028052"
	I1210 05:44:36.918965   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.918159   14122 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-028052"
	I1210 05:44:36.919092   14122 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-028052"
	I1210 05:44:36.919121   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.919228   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.916461   14122 addons.go:70] Setting ingress=true in profile "addons-028052"
	I1210 05:44:36.919285   14122 addons.go:239] Setting addon ingress=true in "addons-028052"
	I1210 05:44:36.919315   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.919373   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.919559   14122 out.go:179] * Verifying Kubernetes components...
	I1210 05:44:36.918191   14122 addons.go:70] Setting volumesnapshots=true in profile "addons-028052"
	I1210 05:44:36.919654   14122 addons.go:239] Setting addon volumesnapshots=true in "addons-028052"
	I1210 05:44:36.919680   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.916501   14122 addons.go:239] Setting addon ingress-dns=true in "addons-028052"
	I1210 05:44:36.920026   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.921287   14122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:44:36.924995   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925131   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925135   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925283   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.925733   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.929403   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.929561   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.965619   14122 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 05:44:36.970115   14122 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:44:36.970136   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 05:44:36.970215   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:36.982194   14122 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-028052"
	I1210 05:44:36.982248   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:36.982786   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:36.996915   14122 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 05:44:36.998557   14122 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:44:36.998581   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 05:44:36.998640   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.011279   14122 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 05:44:37.014638   14122 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:44:37.014662   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 05:44:37.014745   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.026143   14122 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 05:44:37.026362   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 05:44:37.027677   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 05:44:37.027700   14122 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 05:44:37.027790   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	W1210 05:44:37.028641   14122 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 05:44:37.029439   14122 addons.go:239] Setting addon default-storageclass=true in "addons-028052"
	I1210 05:44:37.029692   14122 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 05:44:37.029499   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:37.031514   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 05:44:37.031562   14122 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 05:44:37.031607   14122 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:44:37.031613   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:44:37.031767   14122 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:44:37.033771   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 05:44:37.033872   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.035034   14122 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 05:44:37.035085   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 05:44:37.035099   14122 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 05:44:37.035125   14122 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:44:37.035151   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 05:44:37.035157   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.035226   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.035420   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:44:37.035509   14122 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:44:37.035519   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:44:37.035568   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.036421   14122 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 05:44:37.036542   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 05:44:37.036647   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.036589   14122 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 05:44:37.038417   14122 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 05:44:37.039694   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 05:44:37.039878   14122 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 05:44:37.039918   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 05:44:37.039956   14122 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:44:37.039976   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 05:44:37.039988   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.040019   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.044496   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 05:44:37.049338   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 05:44:37.052265   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:37.053019   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:37.054750   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 05:44:37.059136   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 05:44:37.062134   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.062274   14122 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 05:44:37.062445   14122 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 05:44:37.062921   14122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 05:44:37.063717   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 05:44:37.063793   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 05:44:37.063807   14122 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 05:44:37.063893   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.065852   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 05:44:37.066181   14122 out.go:179]   - Using image docker.io/busybox:stable
	I1210 05:44:37.067317   14122 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:44:37.067374   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 05:44:37.067462   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.067502   14122 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 05:44:37.077665   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 05:44:37.077692   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 05:44:37.077763   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.123224   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.123823   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.124606   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.130440   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.133236   14122 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:44:37.133258   14122 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:44:37.133314   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:37.136208   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.172758   14122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:44:37.176987   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.178461   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.194807   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.194859   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.203591   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	W1210 05:44:37.207854   14122 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:44:37.208052   14122 retry.go:31] will retry after 242.112943ms: ssh: handshake failed: EOF
	I1210 05:44:37.210490   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.210393   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	W1210 05:44:37.218536   14122 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 05:44:37.218570   14122 retry.go:31] will retry after 261.844164ms: ssh: handshake failed: EOF
	I1210 05:44:37.219758   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.231388   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:37.333046   14122 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 05:44:37.333068   14122 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 05:44:37.339629   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 05:44:37.340239   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 05:44:37.344570   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 05:44:37.355786   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 05:44:37.362705   14122 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 05:44:37.362732   14122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 05:44:37.363771   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 05:44:37.363794   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 05:44:37.365359   14122 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:44:37.365379   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 05:44:37.367873   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 05:44:37.382710   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:44:37.384080   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 05:44:37.396128   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 05:44:37.396160   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 05:44:37.401107   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 05:44:37.401879   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 05:44:37.401901   14122 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 05:44:37.402874   14122 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 05:44:37.402889   14122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 05:44:37.402892   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 05:44:37.402907   14122 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 05:44:37.410116   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 05:44:37.441543   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 05:44:37.441591   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 05:44:37.444696   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 05:44:37.444718   14122 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 05:44:37.454408   14122 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:44:37.454431   14122 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 05:44:37.466734   14122 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 05:44:37.466764   14122 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 05:44:37.499004   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 05:44:37.499027   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 05:44:37.517126   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 05:44:37.517159   14122 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 05:44:37.521942   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 05:44:37.527927   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 05:44:37.527952   14122 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 05:44:37.537017   14122 node_ready.go:35] waiting up to 6m0s for node "addons-028052" to be "Ready" ...
	I1210 05:44:37.537297   14122 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
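
Note: the "host record injected into CoreDNS's ConfigMap" line above reflects the sed pipeline run at 05:44:37.062921, which edits the coredns ConfigMap so its Corefile resolves host.minikube.internal to the host gateway (192.168.49.1). Reconstructed from that sed expression rather than read back from the live ConfigMap, the inserted stanza is equivalent to:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
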
	I1210 05:44:37.574373   14122 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:44:37.574395   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 05:44:37.576094   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 05:44:37.576114   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 05:44:37.613145   14122 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:44:37.613173   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 05:44:37.625779   14122 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 05:44:37.625895   14122 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 05:44:37.647792   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 05:44:37.690732   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 05:44:37.690764   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 05:44:37.694199   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:44:37.709767   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:44:37.728327   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 05:44:37.754995   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 05:44:37.755020   14122 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 05:44:37.789895   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 05:44:37.789923   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 05:44:37.822506   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 05:44:37.822545   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 05:44:37.871415   14122 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:44:37.871443   14122 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 05:44:37.904015   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 05:44:38.047851   14122 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-028052" context rescaled to 1 replicas
	I1210 05:44:38.568903   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.167759983s)
	I1210 05:44:38.568903   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.184783302s)
	I1210 05:44:38.568957   14122 addons.go:495] Verifying addon ingress=true in "addons-028052"
	I1210 05:44:38.568959   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.15879269s)
	I1210 05:44:38.568985   14122 addons.go:495] Verifying addon registry=true in "addons-028052"
	I1210 05:44:38.569137   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.047163203s)
	I1210 05:44:38.569165   14122 addons.go:495] Verifying addon metrics-server=true in "addons-028052"
	I1210 05:44:38.570608   14122 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-028052 service yakd-dashboard -n yakd-dashboard
	
	I1210 05:44:38.570617   14122 out.go:179] * Verifying ingress addon...
	I1210 05:44:38.570610   14122 out.go:179] * Verifying registry addon...
	I1210 05:44:38.572703   14122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 05:44:38.572819   14122 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 05:44:38.575417   14122 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:44:38.575541   14122 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:44:38.575556   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
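
The kapi.go lines above and below follow a simple pattern: record the label selector being watched ("Waiting for pod with label ... in ns ..."), count the matching pods, then poll until each one leaves Pending. A minimal, self-contained sketch of that loop using client-go is shown here; it is illustrative only (the helper name, polling interval and error handling are assumptions, not minikube's actual kapi.go code), with the kubeconfig path, selector and 6-minute ceiling taken from this log:

// waitpods.go - a minimal sketch of the wait loop the "waiting for pod ...
// current state: Pending" lines describe: list pods by label selector and
// poll until every matching pod reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls every interval until all pods matching selector in ns are
// Running, or until timeout expires. The selector and namespace values used in
// main mirror the log (e.g. "kubernetes.io/minikube-addons=registry" in "kube-system").
func waitForPods(client kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
		}
		time.Sleep(interval)
	}
}

func main() {
	// The kubeconfig path matches the one used throughout this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(client, "kube-system", "kubernetes.io/minikube-addons=registry", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
}

Changing the interval or timeout in such a loop only changes how many "current state: Pending" lines show up before the pod goes Running or the wait gives up; the repeated kapi.go:96 lines through the rest of this log are exactly that polling.
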
	I1210 05:44:39.009973   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.315713868s)
	W1210 05:44:39.010024   14122 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:44:39.010062   14122 retry.go:31] will retry after 152.124693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 05:44:39.010073   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.300248606s)
	I1210 05:44:39.010143   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.28178285s)
	I1210 05:44:39.010361   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.106308913s)
	I1210 05:44:39.010380   14122 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-028052"
	I1210 05:44:39.012936   14122 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 05:44:39.015215   14122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 05:44:39.017875   14122 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:44:39.017897   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:39.075670   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:39.075865   14122 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 05:44:39.075886   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:39.162949   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 05:44:39.519243   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:39.539855   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:39.619903   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:39.620089   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:40.018255   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:40.119744   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:40.119952   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:40.518334   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:40.619489   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:40.619567   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:41.018871   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:41.119712   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:41.119919   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:41.518896   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:41.540759   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:41.575598   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:41.575934   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:41.634370   14122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47137967s)
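
The failure and re-apply sequence that ends with the 2.47s --force run above is worth unpacking: the first kubectl apply at 05:44:37.694199 submitted the VolumeSnapshotClass object in the same batch as the CRD that defines it, so the API server rejected it with "no matches for kind \"VolumeSnapshotClass\" ... ensure CRDs are installed first"; by the time retry.go re-applies roughly 150ms later the CRDs have been established and the same manifests apply cleanly. A minimal sketch of that apply-then-retry pattern follows; the helper names and retry policy are assumptions for illustration (not minikube's addons.go), while the kubeconfig path, manifest paths and --force flag are taken from this log:

// applyretry.go - a minimal sketch of the apply-then-retry pattern visible above:
// the first kubectl apply of the snapshot manifests can fail because the
// VolumeSnapshotClass is submitted in the same batch as the CRD that defines it,
// so a short backoff and a second apply is enough once the new kind is registered.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// kubectlApply shells out with the same kubeconfig path the log uses; manifests is
// a subset of the files applied in the log, kept short for illustration.
func kubectlApply(force bool, manifests ...string) (string, error) {
	args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
	if force {
		args = append(args, "--force")
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}
	delay := 150 * time.Millisecond
	for attempt := 0; attempt < 5; attempt++ {
		out, err := kubectlApply(attempt > 0, manifests...)
		if err == nil {
			return
		}
		// Only the CRD-ordering failure is worth retrying; anything else is surfaced.
		if !strings.Contains(out, "ensure CRDs are installed first") {
			panic(fmt.Errorf("apply failed: %v\n%s", err, out))
		}
		time.Sleep(delay)
		delay *= 2
	}
}
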
	I1210 05:44:42.018533   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:42.119233   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:42.119513   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:42.518546   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:42.575545   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:42.575545   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:43.018971   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:43.120233   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:43.120595   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:43.518741   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:43.575634   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:43.575845   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:44.018806   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:44.040236   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:44.119986   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:44.120041   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:44.518691   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:44.575616   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:44.575832   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:44.660283   14122 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 05:44:44.660347   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:44.678548   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:44.785765   14122 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 05:44:44.799188   14122 addons.go:239] Setting addon gcp-auth=true in "addons-028052"
	I1210 05:44:44.799264   14122 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:44:44.799672   14122 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:44:44.817202   14122 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 05:44:44.817274   14122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:44:44.834834   14122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:44:44.928692   14122 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 05:44:44.930021   14122 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 05:44:44.931319   14122 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 05:44:44.931332   14122 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 05:44:44.945014   14122 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 05:44:44.945040   14122 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 05:44:44.958160   14122 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:44:44.958183   14122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 05:44:44.971062   14122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 05:44:45.018505   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:45.076062   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:45.076251   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:45.284550   14122 addons.go:495] Verifying addon gcp-auth=true in "addons-028052"
	I1210 05:44:45.286541   14122 out.go:179] * Verifying gcp-auth addon...
	I1210 05:44:45.288685   14122 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 05:44:45.292285   14122 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 05:44:45.292305   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:45.518259   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:45.575438   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:45.575773   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:45.791605   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:46.017896   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:46.040368   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:46.076022   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:46.076147   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:46.291948   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:46.518716   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:46.575815   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:46.576023   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:46.791642   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:47.018444   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:47.076509   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:47.076769   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:47.292418   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:47.517981   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:47.577254   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:47.577421   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:47.792103   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:48.018796   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:48.075689   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:48.075858   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:48.291702   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:48.518663   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:48.540175   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:48.576000   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:48.576080   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:48.791818   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:49.018572   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:49.075558   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:49.075828   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:49.292414   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:49.518023   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:49.575873   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:49.576038   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:49.792018   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:50.018624   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:50.075408   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:50.075639   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:50.292781   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:50.518382   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:50.575703   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:50.575771   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:50.791583   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:51.018204   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:51.039960   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:51.075823   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:51.075928   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:51.291967   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:51.518564   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:51.575747   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:51.575912   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:51.791445   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:52.018299   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:52.076209   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:52.076431   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:52.292348   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:52.518212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:52.576151   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:52.576337   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:52.791772   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:53.018220   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:53.076212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:53.076278   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:53.291869   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:53.518557   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:53.540232   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:53.575797   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:53.576115   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:53.791532   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:54.017890   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:54.075607   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:54.075689   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:54.291396   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:54.518016   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:54.576282   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:54.576324   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:54.792054   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:55.018618   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:55.075404   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:55.075582   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:55.291799   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:55.518391   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:55.575686   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:55.575823   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:55.791605   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:56.018222   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:56.039690   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:56.076086   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:56.076238   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:56.291822   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:56.518287   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:56.576212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:56.576359   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:56.791973   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:57.017741   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:57.075757   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:57.075905   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:57.291668   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:57.518453   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:57.575513   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:57.575619   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:57.792238   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:58.017816   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:44:58.040312   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:44:58.075676   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:58.075868   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:58.291359   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:58.517911   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:58.575776   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:58.575867   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:58.791272   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:59.017604   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:59.075639   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:59.075705   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:59.291120   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:44:59.517604   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:44:59.575660   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:44:59.575687   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:44:59.791891   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:00.018715   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:00.075707   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:00.075730   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:00.291433   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:00.518020   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:00.540243   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:00.576051   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:00.576244   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:00.792122   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:01.017683   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:01.075696   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:01.075912   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:01.291723   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:01.518401   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:01.575611   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:01.575663   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:01.791427   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:02.018009   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:02.076335   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:02.076514   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:02.292242   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:02.517897   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:02.540626   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:02.576090   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:02.576243   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:02.791697   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:03.018235   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:03.075610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:03.075755   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:03.292412   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:03.518043   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:03.575990   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:03.576133   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:03.791739   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:04.018368   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:04.075307   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:04.075418   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:04.291968   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:04.518771   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:04.575869   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:04.575914   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:04.791366   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:05.017946   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:05.040545   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:05.076141   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:05.076253   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:05.291805   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:05.518723   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:05.575942   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:05.576006   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:05.792126   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:06.017515   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:06.075524   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:06.075637   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:06.291772   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:06.518541   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:06.575531   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:06.575700   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:06.792096   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:07.018524   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:07.075606   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:07.075761   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:07.291526   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:07.518176   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:07.539513   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:07.576610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:07.576745   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:07.791420   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:08.017950   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:08.075965   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:08.076024   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:08.291712   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:08.518396   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:08.576313   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:08.576325   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:08.791833   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:09.018715   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:09.075683   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:09.075828   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:09.291456   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:09.517819   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:09.540272   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:09.575691   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:09.575877   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:09.791703   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:10.018759   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:10.075915   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:10.075985   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:10.291570   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:10.518335   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:10.576488   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:10.576653   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:10.791207   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:11.018526   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:11.075424   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:11.075457   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:11.291633   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:11.518331   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:11.576066   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:11.576268   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:11.791362   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:12.018178   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:12.039623   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:12.076307   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:12.076558   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:12.292060   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:12.517990   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:12.576072   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:12.576248   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:12.791934   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:13.018578   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:13.075564   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:13.075719   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:13.291337   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:13.517774   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:13.575900   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:13.576067   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:13.791681   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:14.018451   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:14.039874   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:14.075624   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:14.075690   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:14.291236   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:14.517951   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:14.575766   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:14.575781   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:14.791777   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:15.018534   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:15.075576   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:15.075620   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:15.291269   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:15.517720   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:15.575624   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:15.575674   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:15.791311   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:16.017830   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 05:45:16.040025   14122 node_ready.go:57] node "addons-028052" has "Ready":"False" status (will retry)
	I1210 05:45:16.075541   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:16.075653   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:16.292191   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:16.517712   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:16.575485   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:16.575715   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:16.791193   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:17.017852   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:17.075980   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:17.076018   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:17.291557   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:17.518324   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:17.575532   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:17.575567   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:17.792167   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:18.018802   14122 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 05:45:18.018829   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:18.039658   14122 node_ready.go:49] node "addons-028052" is "Ready"
	I1210 05:45:18.039691   14122 node_ready.go:38] duration metric: took 40.502641864s for node "addons-028052" to be "Ready" ...
	I1210 05:45:18.039708   14122 api_server.go:52] waiting for apiserver process to appear ...
	I1210 05:45:18.039761   14122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:45:18.056952   14122 api_server.go:72] duration metric: took 41.140730527s to wait for apiserver process to appear ...
	I1210 05:45:18.056979   14122 api_server.go:88] waiting for apiserver healthz status ...
	I1210 05:45:18.057001   14122 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 05:45:18.062499   14122 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 05:45:18.063560   14122 api_server.go:141] control plane version: v1.34.2
	I1210 05:45:18.063607   14122 api_server.go:131] duration metric: took 6.618504ms to wait for apiserver health ...
	I1210 05:45:18.063619   14122 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 05:45:18.071935   14122 system_pods.go:59] 20 kube-system pods found
	I1210 05:45:18.071972   14122 system_pods.go:61] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending
	I1210 05:45:18.071987   14122 system_pods.go:61] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.071992   14122 system_pods.go:61] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending
	I1210 05:45:18.071998   14122 system_pods.go:61] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending
	I1210 05:45:18.072004   14122 system_pods.go:61] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending
	I1210 05:45:18.072009   14122 system_pods.go:61] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.072020   14122 system_pods.go:61] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.072028   14122 system_pods.go:61] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.072037   14122 system_pods.go:61] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.072046   14122 system_pods.go:61] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.072057   14122 system_pods.go:61] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.072063   14122 system_pods.go:61] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.072072   14122 system_pods.go:61] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.072077   14122 system_pods.go:61] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending
	I1210 05:45:18.072082   14122 system_pods.go:61] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending
	I1210 05:45:18.072087   14122 system_pods.go:61] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.072095   14122 system_pods.go:61] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending
	I1210 05:45:18.072106   14122 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.072115   14122 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending
	I1210 05:45:18.072123   14122 system_pods.go:61] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:45:18.072130   14122 system_pods.go:74] duration metric: took 8.504374ms to wait for pod list to return data ...
	I1210 05:45:18.072140   14122 default_sa.go:34] waiting for default service account to be created ...
	I1210 05:45:18.074152   14122 default_sa.go:45] found service account: "default"
	I1210 05:45:18.074175   14122 default_sa.go:55] duration metric: took 2.022645ms for default service account to be created ...
	I1210 05:45:18.074198   14122 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 05:45:18.078036   14122 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 05:45:18.078059   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:18.078718   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:18.079084   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:18.079103   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending
	I1210 05:45:18.079110   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.079114   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending
	I1210 05:45:18.079119   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending
	I1210 05:45:18.079122   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending
	I1210 05:45:18.079126   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.079129   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.079134   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.079137   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.079142   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.079145   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.079161   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.079169   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.079172   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending
	I1210 05:45:18.079180   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending
	I1210 05:45:18.079186   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.079193   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:18.079201   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.079210   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending
	I1210 05:45:18.079217   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:45:18.079233   14122 retry.go:31] will retry after 270.873681ms: missing components: kube-dns
	I1210 05:45:18.292121   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:18.396992   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:18.397039   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:18.397049   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.397058   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:18.397074   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:18.397083   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:18.397090   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.397097   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.397103   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.397109   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.397124   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.397130   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.397140   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.397148   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.397158   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:18.397170   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:18.397182   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.397194   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:18.397202   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.397212   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.397220   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 05:45:18.397238   14122 retry.go:31] will retry after 337.985151ms: missing components: kube-dns
	I1210 05:45:18.518860   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:18.575263   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:18.575355   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:18.739074   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:18.739113   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:18.739121   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:18.739127   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:18.739133   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:18.739138   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:18.739142   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:18.739148   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:18.739160   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:18.739176   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:18.739190   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:18.739196   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:18.739205   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:18.739212   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:18.739235   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:18.739245   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:18.739256   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:18.739265   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:18.739277   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.739286   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:18.739295   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Running
	I1210 05:45:18.739314   14122 retry.go:31] will retry after 419.508515ms: missing components: kube-dns
	I1210 05:45:18.838272   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:19.019504   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:19.076153   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:19.076216   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:19.162927   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:19.162962   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:19.162979   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 05:45:19.162988   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:19.162996   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:19.163007   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:19.163013   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:19.163020   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:19.163026   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:19.163035   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:19.163044   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:19.163051   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:19.163058   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:19.163067   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:19.163076   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:19.163089   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:19.163097   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:19.163105   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:19.163113   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.163125   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.163133   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Running
	I1210 05:45:19.163152   14122 retry.go:31] will retry after 543.949488ms: missing components: kube-dns
	I1210 05:45:19.291907   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:19.520367   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:19.577423   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:19.577567   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:19.712676   14122 system_pods.go:86] 20 kube-system pods found
	I1210 05:45:19.712713   14122 system_pods.go:89] "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1210 05:45:19.712722   14122 system_pods.go:89] "coredns-66bc5c9577-rhtg8" [9967dafa-f0c9-4f91-ac48-ac57f6fdf9d4] Running
	I1210 05:45:19.712732   14122 system_pods.go:89] "csi-hostpath-attacher-0" [273e2d3a-459c-4850-b160-c28f4960186e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 05:45:19.712740   14122 system_pods.go:89] "csi-hostpath-resizer-0" [6769a0cb-14fc-4d00-8c7d-66fa0447778b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 05:45:19.712748   14122 system_pods.go:89] "csi-hostpathplugin-8vnr8" [18a2714d-cf6e-42e5-a207-e5579e2cef92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 05:45:19.712763   14122 system_pods.go:89] "etcd-addons-028052" [154f2221-9ac1-4bd0-bc09-6beddc6c319d] Running
	I1210 05:45:19.712770   14122 system_pods.go:89] "kindnet-rvmds" [6d64ff3c-8220-4e32-a413-01c17f9e15f1] Running
	I1210 05:45:19.712775   14122 system_pods.go:89] "kube-apiserver-addons-028052" [fde1887d-6f28-4998-874b-4b4ab09b4e8c] Running
	I1210 05:45:19.712781   14122 system_pods.go:89] "kube-controller-manager-addons-028052" [81b5bf8e-98cf-4f8f-9eaf-64f1ce58774f] Running
	I1210 05:45:19.712789   14122 system_pods.go:89] "kube-ingress-dns-minikube" [76d2f5c4-191d-4a81-b811-659183a18624] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 05:45:19.712803   14122 system_pods.go:89] "kube-proxy-jrpnr" [4aef8104-61c3-48c2-8729-ee8680073a36] Running
	I1210 05:45:19.712817   14122 system_pods.go:89] "kube-scheduler-addons-028052" [9510b199-5cf3-4af0-b6d1-3d4de226f089] Running
	I1210 05:45:19.712826   14122 system_pods.go:89] "metrics-server-85b7d694d7-2mwh2" [dc1a0f7c-3439-4171-b8fe-ee86c125d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 05:45:19.712834   14122 system_pods.go:89] "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 05:45:19.712843   14122 system_pods.go:89] "registry-6b586f9694-6cvjm" [f3e1613c-59b0-4d4e-9529-8f5b529027bb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 05:45:19.712852   14122 system_pods.go:89] "registry-creds-764b6fb674-zmx8t" [d4fbb573-287a-4093-afbe-313a0f4ca20b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 05:45:19.712865   14122 system_pods.go:89] "registry-proxy-kql6j" [82a3b310-71ed-4198-bba0-7ceeccfcaac0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 05:45:19.712875   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jptd2" [b6f577b8-eea1-4010-aa16-e038e8c88c79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.712887   14122 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfr4b" [e1c84ba3-8bbf-49e1-88c9-a6589c8bd02c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 05:45:19.712893   14122 system_pods.go:89] "storage-provisioner" [30e21dab-7ac5-4f79-8d48-de67d0349344] Running
	I1210 05:45:19.712907   14122 system_pods.go:126] duration metric: took 1.638702436s to wait for k8s-apps to be running ...
	I1210 05:45:19.712924   14122 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 05:45:19.712973   14122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:45:19.728961   14122 system_svc.go:56] duration metric: took 16.027252ms WaitForService to wait for kubelet
	I1210 05:45:19.728991   14122 kubeadm.go:587] duration metric: took 42.812773667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:45:19.729014   14122 node_conditions.go:102] verifying NodePressure condition ...
	I1210 05:45:19.732120   14122 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 05:45:19.732151   14122 node_conditions.go:123] node cpu capacity is 8
	I1210 05:45:19.732171   14122 node_conditions.go:105] duration metric: took 3.151408ms to run NodePressure ...
	I1210 05:45:19.732185   14122 start.go:242] waiting for startup goroutines ...
	I1210 05:45:19.811928   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:20.019134   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:20.075654   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:20.075670   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:20.292797   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:20.519444   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:20.620247   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:20.620297   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:20.792060   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:21.019190   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:21.075765   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:21.075805   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:21.292666   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:21.519875   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:21.577754   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:21.578203   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:21.792796   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:22.019041   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:22.075846   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:22.075895   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:22.291671   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:22.521076   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:22.575838   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:22.575868   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:22.791708   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:23.019197   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:23.076062   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:23.076104   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:23.291757   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:23.519354   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:23.576190   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:23.576229   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:23.791968   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:24.019389   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:24.076345   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:24.076375   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:24.292610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:24.519371   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:24.576070   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:24.576142   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:24.792176   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:25.019117   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:25.075788   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:25.075860   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:25.292896   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:25.519025   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:25.577176   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:25.577346   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:25.794000   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:26.018902   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:26.076685   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:26.076873   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:26.291646   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:26.518901   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:26.575353   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:26.575542   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:26.792967   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:27.019093   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:27.076651   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:27.076825   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:27.291081   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:27.519641   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:27.576511   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:27.576711   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:27.792790   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:28.019099   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:28.076268   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:28.076316   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.291966   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:28.522301   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:28.576063   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:28.576125   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:28.791875   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.019185   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:29.075854   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:29.076013   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:29.291923   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:29.519597   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:29.575613   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:29.575785   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:29.792059   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:30.019342   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:30.075924   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.075946   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:30.291964   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:30.584745   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:30.584921   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:30.585329   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:30.792571   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:31.018700   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:31.076794   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:31.077022   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.292058   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:31.519983   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:31.575957   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:31.575980   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:31.792216   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:32.019271   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:32.076312   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:32.076503   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:32.292837   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:32.520372   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:32.576289   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:32.576493   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:32.792925   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:33.019223   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:33.120236   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:33.120289   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:33.292145   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:33.518725   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:33.576758   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:33.576866   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:33.795286   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.068710   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:34.076171   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:34.076264   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:34.292106   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:34.518945   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:34.619329   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:34.619411   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:34.793129   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:35.018634   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:35.076059   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.076170   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:35.292936   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:35.519381   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:35.576026   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:35.576089   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:35.792053   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:36.019599   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:36.076718   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:36.076758   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:36.292713   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:36.519306   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:36.575628   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:36.575668   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:36.792682   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.018749   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:37.076219   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:37.076289   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:37.292064   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:37.519310   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:37.576114   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:37.576205   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:37.792757   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:38.021668   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:38.079443   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:38.079590   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.292160   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:38.518953   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:38.576611   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:38.576664   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:38.792972   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:39.018852   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:39.076435   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:39.076461   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:39.292193   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:39.518936   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:39.577012   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:39.577109   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:39.792020   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:40.018799   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:40.076016   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:40.076056   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:40.291746   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:40.519549   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:40.619718   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:40.619968   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:40.792183   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:41.018422   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:41.076374   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:41.076398   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 05:45:41.292582   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:41.520349   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:41.621039   14122 kapi.go:107] duration metric: took 1m3.048330906s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 05:45:41.621430   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:41.792672   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.019851   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:42.076996   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:42.292920   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:42.519195   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:42.576338   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:42.805033   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:43.022210   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:43.114229   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:43.292270   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:43.519907   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:43.621051   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:43.791553   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:44.019047   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:44.076883   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:44.291675   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:44.519376   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:44.582826   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:44.791318   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:45.018509   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:45.076260   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:45.292274   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:45.518215   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:45.575974   14122 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 05:45:45.791979   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:46.020039   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:46.077509   14122 kapi.go:107] duration metric: took 1m7.504684566s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 05:45:46.292668   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:46.543803   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:46.792263   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:47.020391   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:47.292079   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:47.519151   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:47.792199   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:48.019415   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:48.292610   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:48.519212   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:48.792303   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:49.018675   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:49.291859   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:49.519649   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:49.793021   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.018733   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:50.291888   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:50.518887   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:50.792213   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:51.018348   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:51.292383   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:51.518556   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 05:45:51.793526   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.018500   14122 kapi.go:107] duration metric: took 1m13.003284005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 05:45:52.292373   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:52.791324   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:53.292842   14122 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 05:45:53.791596   14122 kapi.go:107] duration metric: took 1m8.502911061s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 05:45:53.793268   14122 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-028052 cluster.
	I1210 05:45:53.794685   14122 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 05:45:53.796097   14122 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 05:45:53.797493   14122 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, cloud-spanner, inspektor-gadget, default-storageclass, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1210 05:45:53.798787   14122 addons.go:530] duration metric: took 1m16.882551499s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns cloud-spanner inspektor-gadget default-storageclass amd-gpu-device-plugin metrics-server yakd storage-provisioner storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1210 05:45:53.798830   14122 start.go:247] waiting for cluster config update ...
	I1210 05:45:53.798854   14122 start.go:256] writing updated cluster config ...
	I1210 05:45:53.799094   14122 ssh_runner.go:195] Run: rm -f paused
	I1210 05:45:53.803058   14122 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:45:53.806139   14122 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rhtg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.810245   14122 pod_ready.go:94] pod "coredns-66bc5c9577-rhtg8" is "Ready"
	I1210 05:45:53.810271   14122 pod_ready.go:86] duration metric: took 4.109842ms for pod "coredns-66bc5c9577-rhtg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.812437   14122 pod_ready.go:83] waiting for pod "etcd-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.816395   14122 pod_ready.go:94] pod "etcd-addons-028052" is "Ready"
	I1210 05:45:53.816419   14122 pod_ready.go:86] duration metric: took 3.961406ms for pod "etcd-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.818275   14122 pod_ready.go:83] waiting for pod "kube-apiserver-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.821756   14122 pod_ready.go:94] pod "kube-apiserver-addons-028052" is "Ready"
	I1210 05:45:53.821776   14122 pod_ready.go:86] duration metric: took 3.48167ms for pod "kube-apiserver-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:53.823590   14122 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:54.206595   14122 pod_ready.go:94] pod "kube-controller-manager-addons-028052" is "Ready"
	I1210 05:45:54.206627   14122 pod_ready.go:86] duration metric: took 383.017978ms for pod "kube-controller-manager-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:54.406798   14122 pod_ready.go:83] waiting for pod "kube-proxy-jrpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:54.806884   14122 pod_ready.go:94] pod "kube-proxy-jrpnr" is "Ready"
	I1210 05:45:54.806908   14122 pod_ready.go:86] duration metric: took 400.084395ms for pod "kube-proxy-jrpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:55.006739   14122 pod_ready.go:83] waiting for pod "kube-scheduler-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:55.406772   14122 pod_ready.go:94] pod "kube-scheduler-addons-028052" is "Ready"
	I1210 05:45:55.406800   14122 pod_ready.go:86] duration metric: took 400.035752ms for pod "kube-scheduler-addons-028052" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 05:45:55.406812   14122 pod_ready.go:40] duration metric: took 1.60372617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 05:45:55.450990   14122 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 05:45:55.453996   14122 out.go:179] * Done! kubectl is now configured to use "addons-028052" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 05:45:54 addons-028052 crio[774]: time="2025-12-10T05:45:54.627566995Z" level=info msg="Deleting pod gcp-auth_gcp-auth-certs-patch-rbqfb from CNI network \"kindnet\" (type=ptp)"
	Dec 10 05:45:54 addons-028052 crio[774]: time="2025-12-10T05:45:54.650202807Z" level=info msg="Stopped pod sandbox: 366e2ce5a7360ea31c86c359b31f57a2a45d245ee04f25a13f0464e23320b51d" id=7b3a1cbc-a426-47e3-ba07-3a92cb6246b7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.302791966Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4a53616e-d801-4d93-b6ad-4f4a336cd6df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.302850379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.309268626Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1aefb4fd761972f45cfa8d541d90133990fcd7a72004eb7ce3e003953d28671e UID:dddaa4c9-8f7c-4f58-876b-d749ce609491 NetNS:/var/run/netns/7ac12f69-d022-48f0-8de8-dd3b8ade7e24 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001207f0}] Aliases:map[]}"
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.309311798Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.319404933Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1aefb4fd761972f45cfa8d541d90133990fcd7a72004eb7ce3e003953d28671e UID:dddaa4c9-8f7c-4f58-876b-d749ce609491 NetNS:/var/run/netns/7ac12f69-d022-48f0-8de8-dd3b8ade7e24 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001207f0}] Aliases:map[]}"
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.31966346Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.320613644Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.321401172Z" level=info msg="Ran pod sandbox 1aefb4fd761972f45cfa8d541d90133990fcd7a72004eb7ce3e003953d28671e with infra container: default/busybox/POD" id=4a53616e-d801-4d93-b6ad-4f4a336cd6df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.322705248Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df1b2f81-7eb9-4982-84ab-a75881999343 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.322816779Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=df1b2f81-7eb9-4982-84ab-a75881999343 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.322855888Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=df1b2f81-7eb9-4982-84ab-a75881999343 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.323379475Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fb44b7a7-f700-44f8-b648-56e92f74e3f3 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:45:56 addons-028052 crio[774]: time="2025-12-10T05:45:56.324908098Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.552018047Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fb44b7a7-f700-44f8-b648-56e92f74e3f3 name=/runtime.v1.ImageService/PullImage
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.552570774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e528f62-51c4-4c49-bd59-ec25b996255c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.553879955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0b59ca40-e4e9-49b9-a27f-0bf4bf6f098f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.557652278Z" level=info msg="Creating container: default/busybox/busybox" id=c7fed4b4-35f9-4738-8b23-eb2c7f50199d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.557787374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.563512964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.563977169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.603950099Z" level=info msg="Created container 9235598d48cf144a1c3a5472923f02ff33378fd62a06c6b9cdcd38aecbcaca21: default/busybox/busybox" id=c7fed4b4-35f9-4738-8b23-eb2c7f50199d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.604614536Z" level=info msg="Starting container: 9235598d48cf144a1c3a5472923f02ff33378fd62a06c6b9cdcd38aecbcaca21" id=bb518442-9cb0-4c5e-8bb7-3cd2dd3cd845 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 05:45:57 addons-028052 crio[774]: time="2025-12-10T05:45:57.606545539Z" level=info msg="Started container" PID=6205 containerID=9235598d48cf144a1c3a5472923f02ff33378fd62a06c6b9cdcd38aecbcaca21 description=default/busybox/busybox id=bb518442-9cb0-4c5e-8bb7-3cd2dd3cd845 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1aefb4fd761972f45cfa8d541d90133990fcd7a72004eb7ce3e003953d28671e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	9235598d48cf1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   1aefb4fd76197       busybox                                     default
	b6a538b373a67       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             11 seconds ago       Exited              patch                                    2                   366e2ce5a7360       gcp-auth-certs-patch-rbqfb                  gcp-auth
	1ab046fa4ded9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 11 seconds ago       Running             gcp-auth                                 0                   3250592b20f40       gcp-auth-78565c9fb4-7rkqb                   gcp-auth
	16d883ea0cc67       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	736d6c57ec43c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	660e106c0ca88       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 seconds ago       Running             liveness-probe                           0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	b77860e4ca7d8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	ec45dc6fa552f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            16 seconds ago       Running             gadget                                   0                   4177282b3fdfb       gadget-t97f6                                gadget
	15bdf91e47125       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	21f400e0a06c0       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             19 seconds ago       Running             controller                               0                   70ea83e8528e7       ingress-nginx-controller-85d4c799dd-n2nrt   ingress-nginx
	b348e5c8e523a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   1db5ee856e887       registry-proxy-kql6j                        kube-system
	03c1319ba40ad       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     25 seconds ago       Running             nvidia-device-plugin-ctr                 0                   68ec1d3bdaae1       nvidia-device-plugin-daemonset-n659m        kube-system
	30e7ebcfff065       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   28 seconds ago       Running             csi-external-health-monitor-controller   0                   338aeb5814900       csi-hostpathplugin-8vnr8                    kube-system
	6f8bbcfa5378e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   29 seconds ago       Exited              create                                   0                   6d3e57e092bc7       gcp-auth-certs-create-dr5n6                 gcp-auth
	1f872b473fd2a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   89233a02cd793       snapshot-controller-7d9fbc56b8-jptd2        kube-system
	304fa9c779484       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   dfdc90a55197a       amd-gpu-device-plugin-8nkkv                 kube-system
	3d4ccc4d76ae4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   e12ad4f5ad312       snapshot-controller-7d9fbc56b8-vfr4b        kube-system
	a0bbf399c1145       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             31 seconds ago       Running             csi-attacher                             0                   6fb4ac04bd0a4       csi-hostpath-attacher-0                     kube-system
	5f58fcc00134e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago       Running             csi-resizer                              0                   df7b1fb2ab674       csi-hostpath-resizer-0                      kube-system
	0ff28f657acbf       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             33 seconds ago       Exited              patch                                    1                   fcc0359cc56e0       ingress-nginx-admission-patch-297k4         ingress-nginx
	f7b4692966d8d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             33 seconds ago       Running             local-path-provisioner                   0                   6a2ab6d92867c       local-path-provisioner-648f6765c9-lhr8t     local-path-storage
	8acf6b3a51621       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   33 seconds ago       Exited              create                                   0                   8ecdd7139be6f       ingress-nginx-admission-create-z5xgg        ingress-nginx
	dec533b105023       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           35 seconds ago       Running             registry                                 0                   99f23fb5e38de       registry-6b586f9694-6cvjm                   kube-system
	64067205d65ee       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              36 seconds ago       Running             yakd                                     0                   31a763da0a796       yakd-dashboard-5ff678cb9-7tm8b              yakd-dashboard
	7c725f36dd3b4       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        38 seconds ago       Running             metrics-server                           0                   9e4d253065ff7       metrics-server-85b7d694d7-2mwh2             kube-system
	6ed5ed25f8d19       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               39 seconds ago       Running             minikube-ingress-dns                     0                   9201ddf3b7e41       kube-ingress-dns-minikube                   kube-system
	844cf81982783       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               43 seconds ago       Running             cloud-spanner-emulator                   0                   985dbf2e9cbca       cloud-spanner-emulator-5bdddb765-qb5vh      default
	9d1fa5291d10e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             46 seconds ago       Running             coredns                                  0                   24cc0b5a03870       coredns-66bc5c9577-rhtg8                    kube-system
	58125e9bcfadd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             46 seconds ago       Running             storage-provisioner                      0                   b2a60f3d36b30       storage-provisioner                         kube-system
	fbc11ef328020       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   f2214b915976d       kindnet-rvmds                               kube-system
	9497319e6c1c1       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   1c4e7abfd2ce9       kube-proxy-jrpnr                            kube-system
	0122c6e10b651       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   a140c8c7e5204       kube-scheduler-addons-028052                kube-system
	f1f5e9bce84f7       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   9042190ab2d70       kube-controller-manager-addons-028052       kube-system
	65e519df51c1d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   eb93d9eed3221       kube-apiserver-addons-028052                kube-system
	965d086a638c9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   6ae4a87efe1f6       etcd-addons-028052                          kube-system
	
	
	==> coredns [9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce] <==
	[INFO] 10.244.0.19:56488 - 25594 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000236795s
	[INFO] 10.244.0.19:47553 - 27783 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095341s
	[INFO] 10.244.0.19:47553 - 27467 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115501s
	[INFO] 10.244.0.19:51039 - 62580 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000110757s
	[INFO] 10.244.0.19:51039 - 62273 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000113509s
	[INFO] 10.244.0.19:56045 - 63737 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066111s
	[INFO] 10.244.0.19:56045 - 64010 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000116007s
	[INFO] 10.244.0.19:45168 - 4856 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061642s
	[INFO] 10.244.0.19:45168 - 4389 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00006382s
	[INFO] 10.244.0.19:40302 - 10792 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104361s
	[INFO] 10.244.0.19:40302 - 10557 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014481s
	[INFO] 10.244.0.22:49599 - 50369 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212763s
	[INFO] 10.244.0.22:60053 - 18887 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000291077s
	[INFO] 10.244.0.22:40006 - 1725 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154233s
	[INFO] 10.244.0.22:59901 - 62028 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170956s
	[INFO] 10.244.0.22:46739 - 23673 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128558s
	[INFO] 10.244.0.22:54054 - 7752 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160634s
	[INFO] 10.244.0.22:43035 - 46267 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00893555s
	[INFO] 10.244.0.22:55920 - 54747 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.009294086s
	[INFO] 10.244.0.22:48051 - 57961 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004975862s
	[INFO] 10.244.0.22:48123 - 26820 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00514599s
	[INFO] 10.244.0.22:34261 - 5022 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004381886s
	[INFO] 10.244.0.22:51690 - 13895 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006358568s
	[INFO] 10.244.0.22:44085 - 19583 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00114011s
	[INFO] 10.244.0.22:50556 - 14900 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003151752s
	
	
	==> describe nodes <==
	Name:               addons-028052
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-028052
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=addons-028052
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T05_44_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-028052
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-028052"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 05:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-028052
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 05:46:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 05:46:03 +0000   Wed, 10 Dec 2025 05:44:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 05:46:03 +0000   Wed, 10 Dec 2025 05:44:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 05:46:03 +0000   Wed, 10 Dec 2025 05:44:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 05:46:03 +0000   Wed, 10 Dec 2025 05:45:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-028052
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                395aad63-f01b-4f03-a5d4-f3c6cb3cd468
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5bdddb765-qb5vh       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  gadget                      gadget-t97f6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  gcp-auth                    gcp-auth-78565c9fb4-7rkqb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-n2nrt    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         86s
	  kube-system                 amd-gpu-device-plugin-8nkkv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-66bc5c9577-rhtg8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 csi-hostpathplugin-8vnr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 etcd-addons-028052                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-rvmds                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-addons-028052                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-addons-028052        200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-jrpnr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-addons-028052                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 metrics-server-85b7d694d7-2mwh2              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         86s
	  kube-system                 nvidia-device-plugin-daemonset-n659m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 registry-6b586f9694-6cvjm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-creds-764b6fb674-zmx8t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-proxy-kql6j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 snapshot-controller-7d9fbc56b8-jptd2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 snapshot-controller-7d9fbc56b8-vfr4b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  local-path-storage          local-path-provisioner-648f6765c9-lhr8t      0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7tm8b               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node addons-028052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node addons-028052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x8 over 98s)  kubelet          Node addons-028052 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node addons-028052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node addons-028052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                kubelet          Node addons-028052 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           89s                node-controller  Node addons-028052 event: Registered Node addons-028052 in Controller
	  Normal  NodeReady                47s                kubelet          Node addons-028052 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 05:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001659] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001002] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.419736] i8042: Warning: Keylock active
	[  +0.012812] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.526931] block sda: the capability attribute has been deprecated.
	[  +0.099492] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028889] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.744944] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896] <==
	{"level":"warn","ts":"2025-12-10T05:44:28.389289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.396158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.413700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.426945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.434202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.443074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.449595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.457252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.464960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.471890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.478949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.485548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.493170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.515498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.523962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.530177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:28.575049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:39.386027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:44:39.392600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.958777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.965597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.979899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T05:45:05.986327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57022","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T05:45:30.786536Z","caller":"traceutil/trace.go:172","msg":"trace[1424425179] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"102.513356ms","start":"2025-12-10T05:45:30.684000Z","end":"2025-12-10T05:45:30.786513Z","steps":["trace[1424425179] 'process raft request'  (duration: 20.966769ms)","trace[1424425179] 'compare'  (duration: 81.403526ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T05:45:44.918842Z","caller":"traceutil/trace.go:172","msg":"trace[1076135398] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"108.142692ms","start":"2025-12-10T05:45:44.810681Z","end":"2025-12-10T05:45:44.918824Z","steps":["trace[1076135398] 'process raft request'  (duration: 108.026303ms)"],"step_count":1}
	
	
	==> gcp-auth [1ab046fa4ded9f206820086fb67bbe704ab6d1f08a9650b1827d72e28261c43e] <==
	2025/12/10 05:45:53 GCP Auth Webhook started!
	2025/12/10 05:45:55 Ready to marshal response ...
	2025/12/10 05:45:55 Ready to write response ...
	2025/12/10 05:45:55 Ready to marshal response ...
	2025/12/10 05:45:55 Ready to write response ...
	2025/12/10 05:45:56 Ready to marshal response ...
	2025/12/10 05:45:56 Ready to write response ...
	
	
	==> kernel <==
	 05:46:05 up 28 min,  0 user,  load average: 1.26, 0.85, 0.35
	Linux addons-028052 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671] <==
	I1210 05:44:37.551729       1 main.go:148] setting mtu 1500 for CNI 
	I1210 05:44:37.612574       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 05:44:37.612648       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T05:44:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 05:44:37.850981       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 05:44:37.851001       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 05:44:37.851012       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 05:44:37.851153       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 05:45:07.851616       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 05:45:07.851616       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 05:45:07.851740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 05:45:07.857013       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1210 05:45:09.351202       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 05:45:09.351238       1 metrics.go:72] Registering metrics
	I1210 05:45:09.351323       1 controller.go:711] "Syncing nftables rules"
	I1210 05:45:17.853780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:45:17.853823       1 main.go:301] handling current node
	I1210 05:45:27.850558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:45:27.850625       1 main.go:301] handling current node
	I1210 05:45:37.850494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:45:37.850525       1 main.go:301] handling current node
	I1210 05:45:47.851579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:45:47.851631       1 main.go:301] handling current node
	I1210 05:45:57.850529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 05:45:57.850566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a] <==
	W1210 05:45:38.503654       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:38.503654       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.173.142:443: connect: connection refused" logger="UnhandledError"
	E1210 05:45:38.503738       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 05:45:38.504251       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.173.142:443: connect: connection refused" logger="UnhandledError"
	W1210 05:45:39.078449       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:39.078510       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 05:45:39.078529       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 05:45:39.079590       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:39.079667       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 05:45:39.079681       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 05:45:43.513616       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 05:45:43.513669       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 05:45:43.513813       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.173.142:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1210 05:45:43.522893       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 05:46:03.144624       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59606: use of closed network connection
	E1210 05:46:03.293326       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59652: use of closed network connection
	
	
	==> kube-controller-manager [f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15] <==
	I1210 05:44:35.942442       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-028052"
	I1210 05:44:35.942496       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 05:44:35.942532       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 05:44:35.943694       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 05:44:35.943716       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 05:44:35.943749       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 05:44:35.943808       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 05:44:35.943811       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 05:44:35.944924       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 05:44:35.946749       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 05:44:35.947867       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:44:35.949048       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 05:44:35.958448       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 05:44:35.963763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1210 05:44:38.126021       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 05:45:05.951744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 05:45:05.951936       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 05:45:05.951985       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 05:45:05.970770       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 05:45:05.974493       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 05:45:06.052903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 05:45:06.075305       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 05:45:20.949849       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1210 05:45:36.058497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 05:45:36.082969       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d] <==
	I1210 05:44:37.331795       1 server_linux.go:53] "Using iptables proxy"
	I1210 05:44:37.540010       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 05:44:37.640762       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 05:44:37.643652       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 05:44:37.643783       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 05:44:37.917604       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 05:44:37.921346       1 server_linux.go:132] "Using iptables Proxier"
	I1210 05:44:37.982807       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 05:44:37.989266       1 server.go:527] "Version info" version="v1.34.2"
	I1210 05:44:37.995646       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 05:44:37.998084       1 config.go:200] "Starting service config controller"
	I1210 05:44:37.998173       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 05:44:37.998215       1 config.go:106] "Starting endpoint slice config controller"
	I1210 05:44:37.998261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 05:44:37.998291       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 05:44:37.998313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 05:44:37.998497       1 config.go:309] "Starting node config controller"
	I1210 05:44:37.998532       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 05:44:37.999543       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 05:44:38.102620       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 05:44:38.102750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 05:44:38.101550       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb] <==
	E1210 05:44:28.986810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:44:28.987577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1210 05:44:28.987602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 05:44:28.987807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:44:28.987825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 05:44:28.987945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 05:44:28.987965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 05:44:28.988003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 05:44:28.988057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:44:28.988107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:44:28.988168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 05:44:28.988270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:44:28.988314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 05:44:28.988804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:44:28.988868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 05:44:28.989117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:44:29.961220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 05:44:29.977379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 05:44:30.009370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 05:44:30.047862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 05:44:30.068990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 05:44:30.112288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 05:44:30.204830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 05:44:30.440348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 05:44:32.385109       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 05:45:37 addons-028052 kubelet[1302]: I1210 05:45:37.696670    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a330eff3-c391-4ba7-a022-904128e81dc9-kube-api-access-xxb6n" (OuterVolumeSpecName: "kube-api-access-xxb6n") pod "a330eff3-c391-4ba7-a022-904128e81dc9" (UID: "a330eff3-c391-4ba7-a022-904128e81dc9"). InnerVolumeSpecName "kube-api-access-xxb6n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 05:45:37 addons-028052 kubelet[1302]: I1210 05:45:37.795522    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxb6n\" (UniqueName: \"kubernetes.io/projected/a330eff3-c391-4ba7-a022-904128e81dc9-kube-api-access-xxb6n\") on node \"addons-028052\" DevicePath \"\""
	Dec 10 05:45:38 addons-028052 kubelet[1302]: I1210 05:45:38.539053    1302 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d3e57e092bc7be1157cac784d41f53563341dc579481a337ad464079df737da"
	Dec 10 05:45:40 addons-028052 kubelet[1302]: I1210 05:45:40.546705    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-n659m" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:45:41 addons-028052 kubelet[1302]: I1210 05:45:41.556694    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kql6j" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:45:41 addons-028052 kubelet[1302]: I1210 05:45:41.557229    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-n659m" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:45:41 addons-028052 kubelet[1302]: I1210 05:45:41.578822    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-n659m" podStartSLOduration=3.4578301590000002 podStartE2EDuration="24.578800018s" podCreationTimestamp="2025-12-10 05:45:17 +0000 UTC" firstStartedPulling="2025-12-10 05:45:18.418182923 +0000 UTC m=+47.231114531" lastFinishedPulling="2025-12-10 05:45:39.539152791 +0000 UTC m=+68.352084390" observedRunningTime="2025-12-10 05:45:40.560330838 +0000 UTC m=+69.373262450" watchObservedRunningTime="2025-12-10 05:45:41.578800018 +0000 UTC m=+70.391731630"
	Dec 10 05:45:41 addons-028052 kubelet[1302]: I1210 05:45:41.579649    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-kql6j" podStartSLOduration=1.827348575 podStartE2EDuration="24.579631323s" podCreationTimestamp="2025-12-10 05:45:17 +0000 UTC" firstStartedPulling="2025-12-10 05:45:18.502793328 +0000 UTC m=+47.315724922" lastFinishedPulling="2025-12-10 05:45:41.255076076 +0000 UTC m=+70.068007670" observedRunningTime="2025-12-10 05:45:41.575680194 +0000 UTC m=+70.388611804" watchObservedRunningTime="2025-12-10 05:45:41.579631323 +0000 UTC m=+70.392562940"
	Dec 10 05:45:42 addons-028052 kubelet[1302]: I1210 05:45:42.560148    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kql6j" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 05:45:45 addons-028052 kubelet[1302]: I1210 05:45:45.590196    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-n2nrt" podStartSLOduration=56.907245784 podStartE2EDuration="1m7.590166882s" podCreationTimestamp="2025-12-10 05:44:38 +0000 UTC" firstStartedPulling="2025-12-10 05:45:34.435878759 +0000 UTC m=+63.248810356" lastFinishedPulling="2025-12-10 05:45:45.11879986 +0000 UTC m=+73.931731454" observedRunningTime="2025-12-10 05:45:45.589171457 +0000 UTC m=+74.402103081" watchObservedRunningTime="2025-12-10 05:45:45.590166882 +0000 UTC m=+74.403098493"
	Dec 10 05:45:48 addons-028052 kubelet[1302]: I1210 05:45:48.603560    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-t97f6" podStartSLOduration=66.621021275 podStartE2EDuration="1m10.603539989s" podCreationTimestamp="2025-12-10 05:44:38 +0000 UTC" firstStartedPulling="2025-12-10 05:45:43.850154745 +0000 UTC m=+72.663086335" lastFinishedPulling="2025-12-10 05:45:47.832673444 +0000 UTC m=+76.645605049" observedRunningTime="2025-12-10 05:45:48.603165266 +0000 UTC m=+77.416096892" watchObservedRunningTime="2025-12-10 05:45:48.603539989 +0000 UTC m=+77.416471603"
	Dec 10 05:45:49 addons-028052 kubelet[1302]: I1210 05:45:49.330773    1302 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 10 05:45:49 addons-028052 kubelet[1302]: I1210 05:45:49.330825    1302 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 10 05:45:49 addons-028052 kubelet[1302]: E1210 05:45:49.792081    1302 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 10 05:45:49 addons-028052 kubelet[1302]: E1210 05:45:49.792215    1302 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4fbb573-287a-4093-afbe-313a0f4ca20b-gcr-creds podName:d4fbb573-287a-4093-afbe-313a0f4ca20b nodeName:}" failed. No retries permitted until 2025-12-10 05:46:21.792187591 +0000 UTC m=+110.605119182 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d4fbb573-287a-4093-afbe-313a0f4ca20b-gcr-creds") pod "registry-creds-764b6fb674-zmx8t" (UID: "d4fbb573-287a-4093-afbe-313a0f4ca20b") : secret "registry-creds-gcr" not found
	Dec 10 05:45:51 addons-028052 kubelet[1302]: I1210 05:45:51.624520    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-8vnr8" podStartSLOduration=2.241354259 podStartE2EDuration="34.624499311s" podCreationTimestamp="2025-12-10 05:45:17 +0000 UTC" firstStartedPulling="2025-12-10 05:45:18.416973549 +0000 UTC m=+47.229905153" lastFinishedPulling="2025-12-10 05:45:50.800118611 +0000 UTC m=+79.613050205" observedRunningTime="2025-12-10 05:45:51.623631169 +0000 UTC m=+80.436562780" watchObservedRunningTime="2025-12-10 05:45:51.624499311 +0000 UTC m=+80.437430922"
	Dec 10 05:45:53 addons-028052 kubelet[1302]: I1210 05:45:53.278347    1302 scope.go:117] "RemoveContainer" containerID="89b814d409f514bddd9acdd1f72aa1efce48b7c7d52140d824acd9d09c617c3c"
	Dec 10 05:45:53 addons-028052 kubelet[1302]: I1210 05:45:53.621919    1302 scope.go:117] "RemoveContainer" containerID="89b814d409f514bddd9acdd1f72aa1efce48b7c7d52140d824acd9d09c617c3c"
	Dec 10 05:45:53 addons-028052 kubelet[1302]: I1210 05:45:53.632612    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-7rkqb" podStartSLOduration=65.737958922 podStartE2EDuration="1m8.632590656s" podCreationTimestamp="2025-12-10 05:44:45 +0000 UTC" firstStartedPulling="2025-12-10 05:45:50.140527234 +0000 UTC m=+78.953458828" lastFinishedPulling="2025-12-10 05:45:53.035158969 +0000 UTC m=+81.848090562" observedRunningTime="2025-12-10 05:45:53.631603917 +0000 UTC m=+82.444535529" watchObservedRunningTime="2025-12-10 05:45:53.632590656 +0000 UTC m=+82.445522268"
	Dec 10 05:45:54 addons-028052 kubelet[1302]: I1210 05:45:54.726538    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdj2c\" (UniqueName: \"kubernetes.io/projected/83d1b18c-109a-4404-950e-746e0afb8e09-kube-api-access-zdj2c\") pod \"83d1b18c-109a-4404-950e-746e0afb8e09\" (UID: \"83d1b18c-109a-4404-950e-746e0afb8e09\") "
	Dec 10 05:45:54 addons-028052 kubelet[1302]: I1210 05:45:54.729250    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d1b18c-109a-4404-950e-746e0afb8e09-kube-api-access-zdj2c" (OuterVolumeSpecName: "kube-api-access-zdj2c") pod "83d1b18c-109a-4404-950e-746e0afb8e09" (UID: "83d1b18c-109a-4404-950e-746e0afb8e09"). InnerVolumeSpecName "kube-api-access-zdj2c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 05:45:54 addons-028052 kubelet[1302]: I1210 05:45:54.827392    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdj2c\" (UniqueName: \"kubernetes.io/projected/83d1b18c-109a-4404-950e-746e0afb8e09-kube-api-access-zdj2c\") on node \"addons-028052\" DevicePath \"\""
	Dec 10 05:45:55 addons-028052 kubelet[1302]: I1210 05:45:55.632680    1302 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366e2ce5a7360ea31c86c359b31f57a2a45d245ee04f25a13f0464e23320b51d"
	Dec 10 05:45:56 addons-028052 kubelet[1302]: I1210 05:45:56.034772    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dddaa4c9-8f7c-4f58-876b-d749ce609491-gcp-creds\") pod \"busybox\" (UID: \"dddaa4c9-8f7c-4f58-876b-d749ce609491\") " pod="default/busybox"
	Dec 10 05:45:56 addons-028052 kubelet[1302]: I1210 05:45:56.034828    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f9k2\" (UniqueName: \"kubernetes.io/projected/dddaa4c9-8f7c-4f58-876b-d749ce609491-kube-api-access-5f9k2\") pod \"busybox\" (UID: \"dddaa4c9-8f7c-4f58-876b-d749ce609491\") " pod="default/busybox"
	
	
	==> storage-provisioner [58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98] <==
	W1210 05:45:40.681369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:42.741393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:42.804913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:44.808738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:44.920048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:46.923564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:46.928033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:48.932065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:48.936503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:50.940017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:50.943738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:52.965965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:53.001777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:55.005173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:55.010203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:57.013722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:57.018764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:59.022160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:45:59.027633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:46:01.030591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:46:01.034623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:46:03.037595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:46:03.041351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:46:05.043941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 05:46:05.048880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-028052 -n addons-028052
helpers_test.go:270: (dbg) Run:  kubectl --context addons-028052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-create-dr5n6 gcp-auth-certs-patch-rbqfb ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4 registry-creds-764b6fb674-zmx8t
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-028052 describe pod gcp-auth-certs-create-dr5n6 gcp-auth-certs-patch-rbqfb ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4 registry-creds-764b6fb674-zmx8t
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-028052 describe pod gcp-auth-certs-create-dr5n6 gcp-auth-certs-patch-rbqfb ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4 registry-creds-764b6fb674-zmx8t: exit status 1 (64.275019ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-dr5n6" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-rbqfb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-z5xgg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-297k4" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-zmx8t" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-028052 describe pod gcp-auth-certs-create-dr5n6 gcp-auth-certs-patch-rbqfb ingress-nginx-admission-create-z5xgg ingress-nginx-admission-patch-297k4 registry-creds-764b6fb674-zmx8t: exit status 1
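Note on the post-mortem above: the NotFound errors are most likely a timing artifact rather than a separate failure. The pods picked up by the status.phase!=Running selector at helpers_test.go:281 are the completed certificate Jobs (gcp-auth-certs-*, ingress-nginx-admission-*) plus the registry-creds pod that is waiting on a missing secret, and they were evidently deleted or replaced in the window between the list and the describe calls. Assuming the cluster is still up, re-running the same selector query (command taken from helpers_test.go:270 above) would show whether anything non-running actually remains:

	kubectl --context addons-028052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running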
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable headlamp --alsologtostderr -v=1: exit status 11 (249.568837ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:05.873729   23061 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:05.874010   23061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:05.874020   23061 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:05.874024   23061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:05.874254   23061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:05.874552   23061 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:05.874869   23061 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:05.874887   23061 addons.go:622] checking whether the cluster is paused
	I1210 05:46:05.874965   23061 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:05.874976   23061 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:05.875316   23061 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:05.894136   23061 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:05.894197   23061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:05.913222   23061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:06.007374   23061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:06.007528   23061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:06.040619   23061 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:06.040643   23061 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:06.040650   23061 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:06.040655   23061 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:06.040660   23061 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:06.040664   23061 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:06.040668   23061 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:06.040673   23061 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:06.040678   23061 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:06.040689   23061 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:06.040695   23061 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:06.040700   23061 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:06.040706   23061 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:06.040716   23061 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:06.040722   23061 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:06.040742   23061 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:06.040750   23061 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:06.040758   23061 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:06.040762   23061 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:06.040764   23061 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:06.040767   23061 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:06.040770   23061 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:06.040773   23061 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:06.040776   23061 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:06.040779   23061 cri.go:89] found id: ""
	I1210 05:46:06.040817   23061 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:06.055831   23061 out.go:203] 
	W1210 05:46:06.057385   23061 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:06.057413   23061 out.go:285] * 
	* 
	W1210 05:46:06.060293   23061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:06.061755   23061 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.53s)
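Note on the failure mode: every "addons disable" call in this report exits with status 11 at the same point. The binary first checks whether the cluster is paused: listing kube-system containers via crictl succeeds (see the "found id" lines above), but the follow-up "sudo runc list -f json" fails with "open /run/runc: no such file or directory", and that aborts the disable. On this node cri-o 1.34 is the runtime, which typically uses crun as its default OCI runtime, so a /run/runc state directory may never have been created; the pause check, not the addon being disabled, appears to be what fails. A minimal way to confirm this on the node (assuming the addons-028052 profile is still running; paths shown are the usual runc/crun state roots) would be:

	out/minikube-linux-amd64 ssh -p addons-028052 -- ls -ld /run/runc /run/crun
	out/minikube-linux-amd64 ssh -p addons-028052 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 ssh -p addons-028052 -- sudo runc list -f json

If the first command shows only /run/crun and the last one still reports the /run/runc error, the MK_ADDON_DISABLE_PAUSED exits across this group are an artifact of the runtime-state check rather than of the individual addons.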

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-qb5vh" [4e295ac7-5879-4ab4-baf8-fba6924786bd] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002672296s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (272.923537ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:13.877567   23679 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:13.877739   23679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:13.877752   23679 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:13.877759   23679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:13.878102   23679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:13.878517   23679 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:13.878996   23679 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:13.879023   23679 addons.go:622] checking whether the cluster is paused
	I1210 05:46:13.879114   23679 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:13.879127   23679 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:13.879561   23679 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:13.903043   23679 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:13.903126   23679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:13.925176   23679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:14.022717   23679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:14.022830   23679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:14.054683   23679 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:14.054706   23679 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:14.054710   23679 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:14.054714   23679 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:14.054716   23679 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:14.054725   23679 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:14.054727   23679 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:14.054730   23679 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:14.054733   23679 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:14.054739   23679 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:14.054743   23679 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:14.054746   23679 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:14.054748   23679 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:14.054752   23679 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:14.054757   23679 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:14.054767   23679 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:14.054771   23679 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:14.054776   23679 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:14.054780   23679 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:14.054786   23679 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:14.054795   23679 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:14.054800   23679 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:14.054805   23679 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:14.054815   23679 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:14.054820   23679 cri.go:89] found id: ""
	I1210 05:46:14.054866   23679 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:14.072957   23679 out.go:203] 
	W1210 05:46:14.074580   23679 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:14.074608   23679 out.go:285] * 
	* 
	W1210 05:46:14.077483   23679 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:14.079126   23679 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.16s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-028052 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-028052 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-028052 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b9b3d981-8d07-4b4a-b66e-d2a013f8e0ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b9b3d981-8d07-4b4a-b66e-d2a013f8e0ad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b9b3d981-8d07-4b4a-b66e-d2a013f8e0ad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003372894s
addons_test.go:969: (dbg) Run:  kubectl --context addons-028052 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 ssh "cat /opt/local-path-provisioner/pvc-73b92f44-a60e-4168-b0e4-db2e6a8f021c_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-028052 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-028052 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (268.35746ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:14.020493   23751 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:14.020677   23751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:14.020689   23751 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:14.020696   23751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:14.020935   23751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:14.021329   23751 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:14.021808   23751 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:14.021830   23751 addons.go:622] checking whether the cluster is paused
	I1210 05:46:14.021917   23751 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:14.021928   23751 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:14.022264   23751 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:14.043071   23751 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:14.043127   23751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:14.066794   23751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:14.163292   23751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:14.163358   23751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:14.196450   23751 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:14.196518   23751 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:14.196525   23751 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:14.196528   23751 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:14.196531   23751 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:14.196534   23751 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:14.196536   23751 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:14.196539   23751 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:14.196542   23751 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:14.196554   23751 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:14.196557   23751 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:14.196559   23751 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:14.196562   23751 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:14.196564   23751 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:14.196567   23751 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:14.196576   23751 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:14.196584   23751 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:14.196590   23751 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:14.196594   23751 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:14.196599   23751 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:14.196606   23751 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:14.196610   23751 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:14.196615   23751 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:14.196619   23751 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:14.196624   23751 cri.go:89] found id: ""
	I1210 05:46:14.196683   23751 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:14.218656   23751 out.go:203] 
	W1210 05:46:14.220419   23751 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:14.220454   23751 out.go:285] * 
	* 
	W1210 05:46:14.223908   23751 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:14.225531   23751 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.16s)
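The LocalPath scenario itself passed: the PVC bound, the busybox test pod completed, and file1 was readable from the provisioner path. Only the final storage-provisioner-rancher disable hit the same paused-check error as the other addon tests. For reference, the passing part of the flow can be replayed by hand with the same commands the test ran; the testdata paths are relative to the minikube test tree, and <pvc-id> stands for whatever volume name the provisioner assigns on that run:

	kubectl --context addons-028052 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-028052 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-028052 get pvc test-pvc -o jsonpath={.status.phase}    # expect Bound
	minikube -p addons-028052 ssh -- cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1
	kubectl --context addons-028052 delete pod test-local-path
	kubectl --context addons-028052 delete pvc test-pvc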

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-n659m" [28d9824e-f8d8-4b30-8f85-dfcc1e1cdd63] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003457359s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (261.409031ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:08.600611   23258 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:08.600965   23258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:08.600978   23258 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:08.600982   23258 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:08.601249   23258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:08.601660   23258 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:08.602176   23258 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:08.602205   23258 addons.go:622] checking whether the cluster is paused
	I1210 05:46:08.602334   23258 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:08.602365   23258 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:08.602806   23258 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:08.623769   23258 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:08.623813   23258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:08.645972   23258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:08.743025   23258 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:08.743109   23258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:08.775517   23258 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:08.775543   23258 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:08.775547   23258 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:08.775550   23258 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:08.775553   23258 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:08.775557   23258 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:08.775560   23258 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:08.775564   23258 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:08.775569   23258 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:08.775584   23258 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:08.775593   23258 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:08.775598   23258 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:08.775603   23258 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:08.775607   23258 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:08.775612   23258 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:08.775630   23258 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:08.775639   23258 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:08.775644   23258 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:08.775653   23258 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:08.775658   23258 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:08.775662   23258 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:08.775665   23258 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:08.775670   23258 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:08.775680   23258 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:08.775685   23258 cri.go:89] found id: ""
	I1210 05:46:08.775730   23258 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:08.790857   23258 out.go:203] 
	W1210 05:46:08.792630   23258 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:08.792651   23258 out.go:285] * 
	* 
	W1210 05:46:08.795614   23258 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:08.796841   23258 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-7tm8b" [4b811b22-1c61-45ce-af59-43ca9b97cc8a] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003605928s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable yakd --alsologtostderr -v=1: exit status 11 (275.373177ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:13.878124   23678 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:13.878488   23678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:13.878501   23678 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:13.878508   23678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:13.878823   23678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:13.879147   23678 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:13.879616   23678 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:13.879683   23678 addons.go:622] checking whether the cluster is paused
	I1210 05:46:13.879862   23678 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:13.879886   23678 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:13.880430   23678 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:13.903487   23678 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:13.903535   23678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:13.924823   23678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:14.021146   23678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:14.021212   23678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:14.054658   23678 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:14.054685   23678 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:14.054691   23678 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:14.054695   23678 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:14.054700   23678 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:14.054706   23678 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:14.054711   23678 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:14.054716   23678 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:14.054720   23678 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:14.054727   23678 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:14.054732   23678 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:14.054737   23678 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:14.054742   23678 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:14.054746   23678 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:14.054751   23678 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:14.054763   23678 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:14.054768   23678 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:14.054774   23678 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:14.054779   23678 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:14.054783   23678 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:14.054788   23678 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:14.054792   23678 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:14.054797   23678 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:14.054801   23678 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:14.054806   23678 cri.go:89] found id: ""
	I1210 05:46:14.054862   23678 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:14.072181   23678 out.go:203] 
	W1210 05:46:14.073775   23678 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:14.073801   23678 out.go:285] * 
	* 
	W1210 05:46:14.076953   23678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:14.078293   23678 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.28s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-8nkkv" [b217b71d-a798-413e-b061-ddbeb921aa41] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003097946s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-028052 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-028052 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (261.309272ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:46:08.600763   23257 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:46:08.601291   23257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:08.601309   23257 out.go:374] Setting ErrFile to fd 2...
	I1210 05:46:08.601318   23257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:46:08.601798   23257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:46:08.602228   23257 mustload.go:66] Loading cluster: addons-028052
	I1210 05:46:08.603038   23257 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:08.603067   23257 addons.go:622] checking whether the cluster is paused
	I1210 05:46:08.603182   23257 config.go:182] Loaded profile config "addons-028052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:46:08.603198   23257 host.go:66] Checking if "addons-028052" exists ...
	I1210 05:46:08.603631   23257 cli_runner.go:164] Run: docker container inspect addons-028052 --format={{.State.Status}}
	I1210 05:46:08.623763   23257 ssh_runner.go:195] Run: systemctl --version
	I1210 05:46:08.623824   23257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-028052
	I1210 05:46:08.645910   23257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/addons-028052/id_rsa Username:docker}
	I1210 05:46:08.742463   23257 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 05:46:08.742583   23257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 05:46:08.774547   23257 cri.go:89] found id: "16d883ea0cc6779bde20ede57329324ccb3073fc4a4ace9d329105b630097e53"
	I1210 05:46:08.774567   23257 cri.go:89] found id: "736d6c57ec43c1049fc475cb75d66bd4e61af0f5fa34e42b665c70ba4390742c"
	I1210 05:46:08.774570   23257 cri.go:89] found id: "660e106c0ca888f87a50643d5adcd0d1151065c4341897cf2b65f1c18534f68f"
	I1210 05:46:08.774574   23257 cri.go:89] found id: "b77860e4ca7d8d9c02bcbed331e0cbb22323bb93c694b8969dae5e3caf82308b"
	I1210 05:46:08.774577   23257 cri.go:89] found id: "15bdf91e471254f93dee370bf1831f3912afc00e05382ad11815cbbab8f2e1d7"
	I1210 05:46:08.774581   23257 cri.go:89] found id: "b348e5c8e523a1f9eebbeccbb1a381248fcc876c68527ef07c501b958acbec62"
	I1210 05:46:08.774584   23257 cri.go:89] found id: "03c1319ba40adc6cc0c4630b22ba6b75c7514ebc2d7cf02eb7505833be94d7a7"
	I1210 05:46:08.774586   23257 cri.go:89] found id: "30e7ebcfff0650bcc7fdafd943ccd6f50a351909e0b9c33643660cfe8a925bfb"
	I1210 05:46:08.774589   23257 cri.go:89] found id: "1f872b473fd2ae84699c713f2ef8f124fd4fcdd418efbb37106de31bf37f116e"
	I1210 05:46:08.774595   23257 cri.go:89] found id: "304fa9c779484e5496a401ac38622fc781398b5378ffc456e3864b3d0825f120"
	I1210 05:46:08.774597   23257 cri.go:89] found id: "3d4ccc4d76ae4b3a4f2c820c2802b0218844b053079f83f8844177ffea9582be"
	I1210 05:46:08.774601   23257 cri.go:89] found id: "a0bbf399c11456bf767be1edadfa4ce06f450d80bdb74a4ff140d1658684ba30"
	I1210 05:46:08.774603   23257 cri.go:89] found id: "5f58fcc00134eb8d59a63529213019f5e50939e6fd4c584d6eff14ac2a6144e9"
	I1210 05:46:08.774606   23257 cri.go:89] found id: "dec533b105023287d9c5a2f8b2c9416ba56dda3bfc1421a5f53aab1805cf96be"
	I1210 05:46:08.774609   23257 cri.go:89] found id: "7c725f36dd3b4433100a50a43edc6ec082420363ce394e1342d7a178ca2f3ee5"
	I1210 05:46:08.774613   23257 cri.go:89] found id: "6ed5ed25f8d19e3ab10979fe0d41f814698164a6644627db3849c6e9209352d6"
	I1210 05:46:08.774616   23257 cri.go:89] found id: "9d1fa5291d10e03a9903b7e6298d010ed5ca423741104638ae3883dcb6a99dce"
	I1210 05:46:08.774620   23257 cri.go:89] found id: "58125e9bcfadd161d0334430d2e81b4b585bb9e189e3a652088e6fdbc00cdb98"
	I1210 05:46:08.774623   23257 cri.go:89] found id: "fbc11ef328020e6f9cbad908c90e044d4bb674441630aabf78830e7d07ac1671"
	I1210 05:46:08.774625   23257 cri.go:89] found id: "9497319e6c1c192902153d2ab92d489d5b12e5477a82f9c3e5dc7a7cb90e690d"
	I1210 05:46:08.774628   23257 cri.go:89] found id: "0122c6e10b651e471c57d0ec13f92f8bc142cb60e5d24dfbe157c9afb9176abb"
	I1210 05:46:08.774631   23257 cri.go:89] found id: "f1f5e9bce84f7b19972c44f0a37d275e958d15c03c9fc7f5cafd80b0328b7b15"
	I1210 05:46:08.774633   23257 cri.go:89] found id: "65e519df51c1d064d81c14c81e4eb34dfaf950890b576594d1ed96430518937a"
	I1210 05:46:08.774636   23257 cri.go:89] found id: "965d086a638c9808f443b112af7fab37ce3c8230ef95960da97133283a174896"
	I1210 05:46:08.774638   23257 cri.go:89] found id: ""
	I1210 05:46:08.774675   23257 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 05:46:08.789864   23257 out.go:203] 
	W1210 05:46:08.791759   23257 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:46:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 05:46:08.791780   23257 out.go:285] * 
	* 
	W1210 05:46:08.795367   23257 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:46:08.796844   23257 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-028052 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)
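The addon-disable failures above (Headlamp, CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) all tripped over the identical runc error within a few seconds of each other (05:46:06 through 05:46:14), which points at a single environmental cause on the node rather than separate per-addon regressions. One way to confirm there is only one distinct exit reason is to grep the saved report; report.txt below is a placeholder for wherever this output is stored:

	grep -o "Exiting due to .*" report.txt | sort -u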

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image load --daemon kicbase/echo-server:functional-237456 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 image load --daemon kicbase/echo-server:functional-237456 --alsologtostderr: (3.128209021s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 image ls: (2.279837021s)
functional_test.go:461: expected "kicbase/echo-server:functional-237456" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.41s)
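This failure is a different mode from the addon-disable errors: `image load --daemon` returned success, but the follow-up `image ls` did not show the tag, so the test concluded the image never reached the cluster's image store. A quick manual check, assuming the functional-237456 profile is still running and the tag exists in the local Docker daemon; the load and ls commands are the ones the test ran, preceded by a local check of the tag:

	docker images kicbase/echo-server:functional-237456
	# confirm the tag exists on the host before loading
	out/minikube-linux-amd64 -p functional-237456 image load --daemon kicbase/echo-server:functional-237456 --alsologtostderr
	out/minikube-linux-amd64 -p functional-237456 image ls | grep echo-server
	# empty output from the last command reproduces the failure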

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (4.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089 --alsologtostderr: (2.302639984s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 image ls: (2.261961432s)
functional_test.go:461: expected "kicbase/echo-server:functional-228089" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (4.56s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.1s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-090860 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-090860 --output=json --user=testUser: exit status 80 (2.096482855s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1bc7de1d-7dd6-4b25-9b99-0a5e708941da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-090860 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"25c5d269-a60e-425a-be80-bb5a8a79bb9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T06:03:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"61af8063-fa4c-469a-bbe4-d8cc366aeaae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-090860 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.10s)
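With --output=json, minikube prints one CloudEvents-style JSON object per line on stdout, and the underlying GUEST_PAUSE error here is the same open /run/runc failure seen in the addon tests. The error events are easiest to read after filtering by type; a hypothetical one-liner, assuming jq is installed, around the same command the test ran:

	out/minikube-linux-amd64 pause -p json-output-090860 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'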

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.76s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-090860 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-090860 --output=json --user=testUser: exit status 80 (1.762530254s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73df3bc8-c131-4fa9-8a89-833a1fc4c3f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-090860 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d3ab853a-29a8-422f-9d90-e63689d80f5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T06:03:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"ed06f1f0-706f-4682-b9af-f6a6cf12eb0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-090860 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.76s)

                                                
                                    
x
+
TestPause/serial/Pause (6.04s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-203121 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-203121 --alsologtostderr -v=5: exit status 80 (1.807637246s)

                                                
                                                
-- stdout --
	* Pausing node pause-203121 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:20:55.003011  261092 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:20:55.003106  261092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:55.003112  261092 out.go:374] Setting ErrFile to fd 2...
	I1210 06:20:55.003116  261092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:55.003298  261092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:20:55.003592  261092 out.go:368] Setting JSON to false
	I1210 06:20:55.003611  261092 mustload.go:66] Loading cluster: pause-203121
	I1210 06:20:55.003980  261092 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:55.004372  261092 cli_runner.go:164] Run: docker container inspect pause-203121 --format={{.State.Status}}
	I1210 06:20:55.024138  261092 host.go:66] Checking if "pause-203121" exists ...
	I1210 06:20:55.024499  261092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:55.084771  261092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:20:55.074700147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:20:55.085525  261092 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-203121 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:20:55.087895  261092 out.go:179] * Pausing node pause-203121 ... 
	I1210 06:20:55.089504  261092 host.go:66] Checking if "pause-203121" exists ...
	I1210 06:20:55.089799  261092 ssh_runner.go:195] Run: systemctl --version
	I1210 06:20:55.089853  261092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:55.110248  261092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:55.208553  261092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:55.221925  261092 pause.go:52] kubelet running: true
	I1210 06:20:55.222018  261092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:20:55.370193  261092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:20:55.370307  261092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:20:55.446242  261092 cri.go:89] found id: "cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7"
	I1210 06:20:55.446270  261092 cri.go:89] found id: "e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab"
	I1210 06:20:55.446276  261092 cri.go:89] found id: "4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c"
	I1210 06:20:55.446281  261092 cri.go:89] found id: "b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a"
	I1210 06:20:55.446285  261092 cri.go:89] found id: "6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add"
	I1210 06:20:55.446288  261092 cri.go:89] found id: "f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec"
	I1210 06:20:55.446291  261092 cri.go:89] found id: "9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05"
	I1210 06:20:55.446294  261092 cri.go:89] found id: ""
	I1210 06:20:55.446340  261092 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:20:55.460787  261092 retry.go:31] will retry after 308.037643ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:20:55Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:20:55.769261  261092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:55.784044  261092 pause.go:52] kubelet running: false
	I1210 06:20:55.784104  261092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:20:55.907384  261092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:20:55.907485  261092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:20:55.978228  261092 cri.go:89] found id: "cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7"
	I1210 06:20:55.978250  261092 cri.go:89] found id: "e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab"
	I1210 06:20:55.978254  261092 cri.go:89] found id: "4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c"
	I1210 06:20:55.978257  261092 cri.go:89] found id: "b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a"
	I1210 06:20:55.978260  261092 cri.go:89] found id: "6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add"
	I1210 06:20:55.978263  261092 cri.go:89] found id: "f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec"
	I1210 06:20:55.978266  261092 cri.go:89] found id: "9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05"
	I1210 06:20:55.978268  261092 cri.go:89] found id: ""
	I1210 06:20:55.978308  261092 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:20:55.990433  261092 retry.go:31] will retry after 400.273965ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:20:55Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:20:56.391835  261092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:56.412854  261092 pause.go:52] kubelet running: false
	I1210 06:20:56.412923  261092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:20:56.610163  261092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:20:56.610351  261092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:20:56.710815  261092 cri.go:89] found id: "cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7"
	I1210 06:20:56.710841  261092 cri.go:89] found id: "e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab"
	I1210 06:20:56.710846  261092 cri.go:89] found id: "4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c"
	I1210 06:20:56.710852  261092 cri.go:89] found id: "b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a"
	I1210 06:20:56.710857  261092 cri.go:89] found id: "6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add"
	I1210 06:20:56.710862  261092 cri.go:89] found id: "f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec"
	I1210 06:20:56.710866  261092 cri.go:89] found id: "9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05"
	I1210 06:20:56.710872  261092 cri.go:89] found id: ""
	I1210 06:20:56.710915  261092 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:20:56.730248  261092 out.go:203] 
	W1210 06:20:56.731742  261092 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:20:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:20:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:20:56.731767  261092 out.go:285] * 
	* 
	W1210 06:20:56.738850  261092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:20:56.741918  261092 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-203121 --alsologtostderr -v=5" : exit status 80
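The stderr above shows the same probe being retried twice (retry.go: "will retry after 308.037643ms", then 400.273965ms) before the command gives up with GUEST_PAUSE. A small illustrative sketch of that retry-then-fail shape, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, waiting a little longer between tries.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait a bit each round
	}
	return err
}

func main() {
	probe := func() error {
		// Stand-in for the failing "sudo runc list -f json" call.
		return errors.New("list running: runc: exit status 1")
	}
	if err := retry(3, 300*time.Millisecond, probe); err != nil {
		fmt.Println("Exiting due to GUEST_PAUSE:", err)
	}
}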
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-203121
helpers_test.go:244: (dbg) docker inspect pause-203121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e",
	        "Created": "2025-12-10T06:19:43.805927705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:43.871651642Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/hosts",
	        "LogPath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e-json.log",
	        "Name": "/pause-203121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-203121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-203121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e",
	                "LowerDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-203121",
	                "Source": "/var/lib/docker/volumes/pause-203121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-203121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-203121",
	                "name.minikube.sigs.k8s.io": "pause-203121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c187516cefe69bb383bcdfa9002f0e2ac0f29d3b12db4ee9290faf735425067a",
	            "SandboxKey": "/var/run/docker/netns/c187516cefe6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-203121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e492d46712ecdcb5aaa4dd1d5f297eefd549161039fd49ae01777e7980eb8128",
	                    "EndpointID": "c4057375f99a0d0e05bba9ae49c34b269f87c68d4a0ff6f812a197fc145e6a2b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "36:7a:56:74:89:21",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-203121",
	                        "c6c142fbb7f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
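The Ports section of the inspect output above is what the pause path reads to find the SSH endpoint (22/tcp mapped to 127.0.0.1:33054, matching the earlier sshutil log line). A sketch of pulling that port back out with the same Go template the log shows, assuming the docker CLI is on PATH and the pause-203121 container still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the logged "docker container inspect -f ..." call.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "pause-203121").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33054
}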
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-203121 -n pause-203121
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-203121 -n pause-203121: exit status 2 (408.875271ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
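The post-mortem helper runs `minikube status --format={{.Host}}` and keeps going even though the command exits 2, since the stdout ("Running") is the part it wants. A hedged sketch of that tolerant status probe, reusing the binary path and profile name from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "pause-203121", "-n", "pause-203121")
	out, err := cmd.Output()
	fmt.Println("host state:", strings.TrimSpace(string(out))) // "Running"
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit encodes cluster state; the harness notes it "may be ok".
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	}
}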
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-203121 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-203121 logs -n 25: (1.104051639s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-088618 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                        │ cert-options-088618       │ jenkins │ v1.37.0 │ 10 Dec 25 06:18 UTC │ 10 Dec 25 06:18 UTC │
	│ ssh     │ -p cert-options-088618 -- sudo cat /etc/kubernetes/admin.conf                                                                                                      │ cert-options-088618       │ jenkins │ v1.37.0 │ 10 Dec 25 06:18 UTC │ 10 Dec 25 06:18 UTC │
	│ delete  │ -p cert-options-088618                                                                                                                                             │ cert-options-088618       │ jenkins │ v1.37.0 │ 10 Dec 25 06:18 UTC │ 10 Dec 25 06:18 UTC │
	│ start   │ -p running-upgrade-538113 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                               │ running-upgrade-538113    │ jenkins │ v1.35.0 │ 10 Dec 25 06:18 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                  │ kubernetes-upgrade-800617 │ jenkins │ v1.37.0 │ 10 Dec 25 06:18 UTC │                     │
	│ start   │ -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ kubernetes-upgrade-800617 │ jenkins │ v1.37.0 │ 10 Dec 25 06:18 UTC │ 10 Dec 25 06:19 UTC │
	│ delete  │ -p kubernetes-upgrade-800617                                                                                                                                       │ kubernetes-upgrade-800617 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p stopped-upgrade-709856 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                               │ stopped-upgrade-709856    │ jenkins │ v1.35.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p running-upgrade-538113 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                           │ running-upgrade-538113    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p missing-upgrade-490462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                           │ missing-upgrade-490462    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ stop    │ stopped-upgrade-709856 stop                                                                                                                                        │ stopped-upgrade-709856    │ jenkins │ v1.35.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p stopped-upgrade-709856 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                           │ stopped-upgrade-709856    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete  │ -p running-upgrade-538113                                                                                                                                          │ running-upgrade-538113    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p pause-203121 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                          │ pause-203121              │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:20 UTC │
	│ delete  │ -p stopped-upgrade-709856                                                                                                                                          │ stopped-upgrade-709856    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p cert-expiration-936135 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                          │ cert-expiration-936135    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:20 UTC │
	│ delete  │ -p missing-upgrade-490462                                                                                                                                          │ missing-upgrade-490462    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p auto-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                            │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:20 UTC │
	│ start   │ -p calico-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-201263             │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ delete  │ -p cert-expiration-936135                                                                                                                                          │ cert-expiration-936135    │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ start   │ -p custom-flannel-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-201263     │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ ssh     │ -p auto-201263 pgrep -a kubelet                                                                                                                                    │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ start   │ -p pause-203121 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                   │ pause-203121              │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ pause   │ -p pause-203121 --alsologtostderr -v=5                                                                                                                             │ pause-203121              │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ ssh     │ -p calico-201263 pgrep -a kubelet                                                                                                                                  │ calico-201263             │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:20:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:20:48.519285  259888 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:20:48.519417  259888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:48.519429  259888 out.go:374] Setting ErrFile to fd 2...
	I1210 06:20:48.519435  259888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:48.519662  259888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:20:48.520188  259888 out.go:368] Setting JSON to false
	I1210 06:20:48.521661  259888 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3800,"bootTime":1765343849,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:20:48.521724  259888 start.go:143] virtualization: kvm guest
	I1210 06:20:48.524198  259888 out.go:179] * [pause-203121] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:20:48.525797  259888 notify.go:221] Checking for updates...
	I1210 06:20:48.525826  259888 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:20:48.527873  259888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:48.529454  259888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:20:48.530991  259888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:20:48.532426  259888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:20:48.533892  259888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:48.536047  259888 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:48.536752  259888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:48.563635  259888 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:20:48.563720  259888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:48.626860  259888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:20:48.613771919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:20:48.627016  259888 docker.go:319] overlay module found
	I1210 06:20:48.628922  259888 out.go:179] * Using the docker driver based on existing profile
	I1210 06:20:48.630221  259888 start.go:309] selected driver: docker
	I1210 06:20:48.630236  259888 start.go:927] validating driver "docker" against &{Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:48.630357  259888 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:48.630438  259888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:48.685332  259888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:20:48.675301563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:20:48.686057  259888 cni.go:84] Creating CNI manager for ""
	I1210 06:20:48.686137  259888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:20:48.686192  259888 start.go:353] cluster config:
	{Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fals
e storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:48.688406  259888 out.go:179] * Starting "pause-203121" primary control-plane node in "pause-203121" cluster
	I1210 06:20:48.689819  259888 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:20:48.691092  259888 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:20:48.692410  259888 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:20:48.692457  259888 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:20:48.692488  259888 cache.go:65] Caching tarball of preloaded images
	I1210 06:20:48.692548  259888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:20:48.692596  259888 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:20:48.692612  259888 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:20:48.692767  259888 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/config.json ...
	I1210 06:20:48.714642  259888 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:20:48.714667  259888 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:20:48.714687  259888 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:20:48.714722  259888 start.go:360] acquireMachinesLock for pause-203121: {Name:mk1bee09c0ae4144013617ea77d79f9f746f3d96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:20:48.714792  259888 start.go:364] duration metric: took 44.733µs to acquireMachinesLock for "pause-203121"
	I1210 06:20:48.714816  259888 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:20:48.714825  259888 fix.go:54] fixHost starting: 
	I1210 06:20:48.715030  259888 cli_runner.go:164] Run: docker container inspect pause-203121 --format={{.State.Status}}
	I1210 06:20:48.733826  259888 fix.go:112] recreateIfNeeded on pause-203121: state=Running err=<nil>
	W1210 06:20:48.733858  259888 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:20:47.541232  249014 system_pods.go:86] 9 kube-system pods found
	I1210 06:20:47.541263  249014 system_pods.go:89] "calico-kube-controllers-5c676f698c-j8l7d" [24904c80-7f3b-4329-ac32-7b6f5b5ab3c2] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1210 06:20:47.541271  249014 system_pods.go:89] "calico-node-sndmq" [0a9fda50-386f-4548-b300-0f2b61dfb24a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1210 06:20:47.541279  249014 system_pods.go:89] "coredns-66bc5c9577-9s7ld" [5d303937-d9b7-4946-b8ef-45e65b5f04f1] Running
	I1210 06:20:47.541283  249014 system_pods.go:89] "etcd-calico-201263" [a33c1442-9339-4b03-93ab-034382a4ab11] Running
	I1210 06:20:47.541287  249014 system_pods.go:89] "kube-apiserver-calico-201263" [d586d73a-8d14-4a43-b182-9abf1e7d659f] Running
	I1210 06:20:47.541291  249014 system_pods.go:89] "kube-controller-manager-calico-201263" [962a3c12-8790-4f9e-9ea3-aba14c4c5972] Running
	I1210 06:20:47.541295  249014 system_pods.go:89] "kube-proxy-7bwmh" [d08030b5-2aed-4336-b38b-e4213293ac7e] Running
	I1210 06:20:47.541300  249014 system_pods.go:89] "kube-scheduler-calico-201263" [fdcfeefb-e366-46c5-aed3-c883a3ced741] Running
	I1210 06:20:47.541305  249014 system_pods.go:89] "storage-provisioner" [817f9fad-fc46-4b94-96b7-dda29c164525] Running
	I1210 06:20:47.541318  249014 system_pods.go:126] duration metric: took 14.158550887s to wait for k8s-apps to be running ...
	I1210 06:20:47.541330  249014 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:20:47.541383  249014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:47.556623  249014 system_svc.go:56] duration metric: took 15.285294ms WaitForService to wait for kubelet
	I1210 06:20:47.556653  249014 kubeadm.go:587] duration metric: took 19.086544011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:20:47.556673  249014 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:20:47.560018  249014 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:20:47.560069  249014 node_conditions.go:123] node cpu capacity is 8
	I1210 06:20:47.560084  249014 node_conditions.go:105] duration metric: took 3.405879ms to run NodePressure ...
	I1210 06:20:47.560099  249014 start.go:242] waiting for startup goroutines ...
	I1210 06:20:47.560112  249014 start.go:247] waiting for cluster config update ...
	I1210 06:20:47.560126  249014 start.go:256] writing updated cluster config ...
	I1210 06:20:47.560493  249014 ssh_runner.go:195] Run: rm -f paused
	I1210 06:20:47.564976  249014 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:47.568729  249014 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9s7ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.573242  249014 pod_ready.go:94] pod "coredns-66bc5c9577-9s7ld" is "Ready"
	I1210 06:20:47.573264  249014 pod_ready.go:86] duration metric: took 4.509237ms for pod "coredns-66bc5c9577-9s7ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.575262  249014 pod_ready.go:83] waiting for pod "etcd-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.579508  249014 pod_ready.go:94] pod "etcd-calico-201263" is "Ready"
	I1210 06:20:47.579531  249014 pod_ready.go:86] duration metric: took 4.249744ms for pod "etcd-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.581696  249014 pod_ready.go:83] waiting for pod "kube-apiserver-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.585829  249014 pod_ready.go:94] pod "kube-apiserver-calico-201263" is "Ready"
	I1210 06:20:47.585854  249014 pod_ready.go:86] duration metric: took 4.136871ms for pod "kube-apiserver-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.587776  249014 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.969151  249014 pod_ready.go:94] pod "kube-controller-manager-calico-201263" is "Ready"
	I1210 06:20:47.969175  249014 pod_ready.go:86] duration metric: took 381.378433ms for pod "kube-controller-manager-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:48.169295  249014 pod_ready.go:83] waiting for pod "kube-proxy-7bwmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:48.569807  249014 pod_ready.go:94] pod "kube-proxy-7bwmh" is "Ready"
	I1210 06:20:48.569842  249014 pod_ready.go:86] duration metric: took 400.515607ms for pod "kube-proxy-7bwmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:48.769914  249014 pod_ready.go:83] waiting for pod "kube-scheduler-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:49.169605  249014 pod_ready.go:94] pod "kube-scheduler-calico-201263" is "Ready"
	I1210 06:20:49.169630  249014 pod_ready.go:86] duration metric: took 399.519093ms for pod "kube-scheduler-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:49.169642  249014 pod_ready.go:40] duration metric: took 1.604635835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:49.215058  249014 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:20:49.217437  249014 out.go:179] * Done! kubectl is now configured to use "calico-201263" cluster and "default" namespace by default
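The readiness wait above polls kube-system pods by the well-known labels listed in the log (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A rough manual equivalent, assuming the calico-201263 kubectl context this run just configured, would be:

    kubectl --context calico-201263 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context calico-201263 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=240s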
	I1210 06:20:48.736763  259888 out.go:252] * Updating the running docker "pause-203121" container ...
	I1210 06:20:48.736808  259888 machine.go:94] provisionDockerMachine start ...
	I1210 06:20:48.736909  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:48.760774  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:48.761192  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:48.761218  259888 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:20:48.896775  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-203121
	
	I1210 06:20:48.896805  259888 ubuntu.go:182] provisioning hostname "pause-203121"
	I1210 06:20:48.896869  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:48.917591  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:48.917846  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:48.917864  259888 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-203121 && echo "pause-203121" | sudo tee /etc/hostname
	I1210 06:20:49.061986  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-203121
	
	I1210 06:20:49.062044  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:49.080968  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:49.081226  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:49.081252  259888 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-203121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-203121/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-203121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:20:49.213699  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:20:49.213727  259888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:20:49.213751  259888 ubuntu.go:190] setting up certificates
	I1210 06:20:49.213765  259888 provision.go:84] configureAuth start
	I1210 06:20:49.213847  259888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-203121
	I1210 06:20:49.233563  259888 provision.go:143] copyHostCerts
	I1210 06:20:49.233639  259888 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:20:49.233657  259888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:20:49.233752  259888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:20:49.234279  259888 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:20:49.234295  259888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:20:49.234351  259888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:20:49.234503  259888 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:20:49.234511  259888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:20:49.234551  259888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:20:49.234655  259888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.pause-203121 san=[127.0.0.1 192.168.103.2 localhost minikube pause-203121]
	I1210 06:20:49.399505  259888 provision.go:177] copyRemoteCerts
	I1210 06:20:49.399570  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:20:49.399602  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:49.419207  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:49.520485  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:20:49.540996  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:20:49.560718  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 06:20:49.580257  259888 provision.go:87] duration metric: took 366.466467ms to configureAuth
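configureAuth above regenerates the machine server certificate with the SANs shown in the log (127.0.0.1, 192.168.103.2, localhost, minikube, pause-203121). As a sketch, the generated certificate at the ServerCertPath from the auth options can be inspected with openssl:

    openssl x509 -in /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'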
	I1210 06:20:49.580290  259888 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:20:49.580524  259888 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:49.580630  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:49.600382  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:49.600638  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:49.600656  259888 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:20:50.314133  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:20:50.314156  259888 machine.go:97] duration metric: took 1.577339765s to provisionDockerMachine
	I1210 06:20:50.314166  259888 start.go:293] postStartSetup for "pause-203121" (driver="docker")
	I1210 06:20:50.314176  259888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:20:50.314241  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:20:50.314285  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.334582  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.431556  259888 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:20:50.435575  259888 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:20:50.435602  259888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:20:50.435613  259888 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:20:50.435670  259888 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:20:50.435756  259888 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:20:50.435893  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:20:50.444542  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:20:50.464558  259888 start.go:296] duration metric: took 150.378766ms for postStartSetup
	I1210 06:20:50.464649  259888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:20:50.464695  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.485148  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.581599  259888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:20:50.587391  259888 fix.go:56] duration metric: took 1.872551197s for fixHost
	I1210 06:20:50.587420  259888 start.go:83] releasing machines lock for "pause-203121", held for 1.872615096s
	I1210 06:20:50.587502  259888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-203121
	I1210 06:20:50.606810  259888 ssh_runner.go:195] Run: cat /version.json
	I1210 06:20:50.606858  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.606923  259888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:20:50.606993  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.626907  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.627554  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.719108  259888 ssh_runner.go:195] Run: systemctl --version
	I1210 06:20:50.786355  259888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:20:50.825747  259888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:20:50.830941  259888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:20:50.830997  259888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:20:50.839910  259888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:20:50.839938  259888 start.go:496] detecting cgroup driver to use...
	I1210 06:20:50.839981  259888 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:20:50.840046  259888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:20:50.855954  259888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:20:50.869260  259888 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:20:50.869333  259888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:20:50.885349  259888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:20:50.900195  259888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:20:51.022729  259888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:20:51.135014  259888 docker.go:234] disabling docker service ...
	I1210 06:20:51.135075  259888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:20:51.150508  259888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:20:51.163515  259888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:20:51.277521  259888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:20:51.393911  259888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:20:51.407181  259888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:20:51.422181  259888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:20:51.422270  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.431988  259888 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:20:51.432038  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.441880  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.452239  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.462072  259888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:20:51.472098  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.482886  259888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.493228  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.504329  259888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:20:51.513022  259888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:20:51.522030  259888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:20:51.638728  259888 ssh_runner.go:195] Run: sudo systemctl restart crio
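The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch CRI-O to the systemd cgroup manager with conmon_cgroup = "pod", and re-add the net.ipv4.ip_unprivileged_port_start=0 sysctl before the restart. A quick way to confirm the resulting settings in the same file the log edits is:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf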
	I1210 06:20:51.864149  259888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:20:51.864259  259888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:20:51.868690  259888 start.go:564] Will wait 60s for crictl version
	I1210 06:20:51.868753  259888 ssh_runner.go:195] Run: which crictl
	I1210 06:20:51.873406  259888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:20:51.903298  259888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:20:51.903409  259888 ssh_runner.go:195] Run: crio --version
	I1210 06:20:51.934944  259888 ssh_runner.go:195] Run: crio --version
	I1210 06:20:51.969771  259888 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 06:20:47.967004  252480 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:47.967034  252480 system_pods.go:89] "coredns-66bc5c9577-r7p5t" [97853ca3-8982-4324-a9f2-005209f7a2dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:20:47.967040  252480 system_pods.go:89] "etcd-custom-flannel-201263" [a7895f5c-56b2-4262-b678-eab987e9faa4] Running
	I1210 06:20:47.967047  252480 system_pods.go:89] "kube-apiserver-custom-flannel-201263" [99edbcf6-7869-4984-b2b8-047a3bd5b219] Running
	I1210 06:20:47.967052  252480 system_pods.go:89] "kube-controller-manager-custom-flannel-201263" [efe68542-32c5-454b-aafb-f638325cac55] Running
	I1210 06:20:47.967056  252480 system_pods.go:89] "kube-proxy-lmwlf" [f64f811e-2c3a-4ace-bf39-4dfae0bf9e48] Running
	I1210 06:20:47.967062  252480 system_pods.go:89] "kube-scheduler-custom-flannel-201263" [f2835dd2-8d60-48e3-b5d0-bd3908ce76db] Running
	I1210 06:20:47.967067  252480 system_pods.go:89] "storage-provisioner" [f53483dd-3c55-479a-b395-e6caefa2136d] Running
	I1210 06:20:47.967083  252480 retry.go:31] will retry after 2.081778836s: missing components: kube-dns
	I1210 06:20:50.053527  252480 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:50.053565  252480 system_pods.go:89] "coredns-66bc5c9577-r7p5t" [97853ca3-8982-4324-a9f2-005209f7a2dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:20:50.053572  252480 system_pods.go:89] "etcd-custom-flannel-201263" [a7895f5c-56b2-4262-b678-eab987e9faa4] Running
	I1210 06:20:50.053582  252480 system_pods.go:89] "kube-apiserver-custom-flannel-201263" [99edbcf6-7869-4984-b2b8-047a3bd5b219] Running
	I1210 06:20:50.053588  252480 system_pods.go:89] "kube-controller-manager-custom-flannel-201263" [efe68542-32c5-454b-aafb-f638325cac55] Running
	I1210 06:20:50.053594  252480 system_pods.go:89] "kube-proxy-lmwlf" [f64f811e-2c3a-4ace-bf39-4dfae0bf9e48] Running
	I1210 06:20:50.053600  252480 system_pods.go:89] "kube-scheduler-custom-flannel-201263" [f2835dd2-8d60-48e3-b5d0-bd3908ce76db] Running
	I1210 06:20:50.053604  252480 system_pods.go:89] "storage-provisioner" [f53483dd-3c55-479a-b395-e6caefa2136d] Running
	I1210 06:20:50.053622  252480 retry.go:31] will retry after 3.048460615s: missing components: kube-dns
	I1210 06:20:51.971269  259888 cli_runner.go:164] Run: docker network inspect pause-203121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:20:51.994209  259888 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:20:51.998892  259888 kubeadm.go:884] updating cluster {Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:20:51.999067  259888 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:20:51.999126  259888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:20:52.040183  259888 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:20:52.040206  259888 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:20:52.040342  259888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:20:52.071028  259888 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:20:52.071092  259888 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:20:52.071101  259888 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1210 06:20:52.071234  259888 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-203121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:20:52.071319  259888 ssh_runner.go:195] Run: crio config
	I1210 06:20:52.125645  259888 cni.go:84] Creating CNI manager for ""
	I1210 06:20:52.125672  259888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:20:52.125690  259888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:20:52.125728  259888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-203121 NodeName:pause-203121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:20:52.125881  259888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-203121"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:20:52.125955  259888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:20:52.134938  259888 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:20:52.135008  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:20:52.143288  259888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 06:20:52.156909  259888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:20:52.170031  259888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
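The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (2211 bytes) and later diffed against the existing /var/tmp/minikube/kubeadm.yaml. As a sketch, and assuming the kubeadm binary staged in the same binaries directory provides the `config validate` subcommand, the file could also be sanity-checked on the node:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new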
	I1210 06:20:52.183189  259888 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:20:52.187335  259888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:20:52.303320  259888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:20:52.317369  259888 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121 for IP: 192.168.103.2
	I1210 06:20:52.317392  259888 certs.go:195] generating shared ca certs ...
	I1210 06:20:52.317411  259888 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:20:52.317579  259888 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:20:52.317621  259888 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:20:52.317631  259888 certs.go:257] generating profile certs ...
	I1210 06:20:52.317711  259888 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.key
	I1210 06:20:52.317768  259888 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/apiserver.key.d41d005a
	I1210 06:20:52.317806  259888 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/proxy-client.key
	I1210 06:20:52.317913  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:20:52.317954  259888 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:20:52.317963  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:20:52.317991  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:20:52.318017  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:20:52.318040  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:20:52.318079  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:20:52.318851  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:20:52.338548  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:20:52.357560  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:20:52.378666  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:20:52.398841  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:20:52.417441  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:20:52.436599  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:20:52.457021  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:20:52.479290  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:20:52.502987  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:20:52.524016  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:20:52.546933  259888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:20:52.562607  259888 ssh_runner.go:195] Run: openssl version
	I1210 06:20:52.570738  259888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.579887  259888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:20:52.588288  259888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.592503  259888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.592561  259888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.629367  259888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:20:52.637576  259888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.646369  259888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:20:52.655793  259888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.659987  259888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.660052  259888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.699093  259888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:20:52.707497  259888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.715697  259888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:20:52.723733  259888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.727775  259888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.727838  259888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.764439  259888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
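Each CA above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how TLS clients on the node locate trusted certificates in a hashed directory. The hash in the link name comes from the same command the log runs, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching /etc/ssl/certs/b5213941.0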
	I1210 06:20:52.773129  259888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:20:52.777191  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:20:52.814779  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:20:52.849520  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:20:52.886341  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:20:52.923917  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:20:52.960431  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
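The -checkend 86400 calls above exit with status 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, evidently allowing the restart path to keep the existing control-plane certificates rather than regenerating them. A manual spot check on the node looks the same, e.g.:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid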
	I1210 06:20:52.999023  259888 kubeadm.go:401] StartCluster: {Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:52.999159  259888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:20:52.999221  259888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:20:53.029794  259888 cri.go:89] found id: "cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7"
	I1210 06:20:53.029815  259888 cri.go:89] found id: "e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab"
	I1210 06:20:53.029822  259888 cri.go:89] found id: "4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c"
	I1210 06:20:53.029827  259888 cri.go:89] found id: "b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a"
	I1210 06:20:53.029832  259888 cri.go:89] found id: "6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add"
	I1210 06:20:53.029836  259888 cri.go:89] found id: "f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec"
	I1210 06:20:53.029840  259888 cri.go:89] found id: "9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05"
	I1210 06:20:53.029844  259888 cri.go:89] found id: ""
	I1210 06:20:53.029892  259888 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:20:53.042327  259888 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:20:53Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:20:53.042398  259888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:20:53.051326  259888 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:20:53.051344  259888 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:20:53.051392  259888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:20:53.059639  259888 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:20:53.060448  259888 kubeconfig.go:125] found "pause-203121" server: "https://192.168.103.2:8443"
	I1210 06:20:53.061648  259888 kapi.go:59] client config for pause-203121: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:20:53.062062  259888 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:20:53.062079  259888 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:20:53.062084  259888 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:20:53.062088  259888 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:20:53.062092  259888 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:20:53.062400  259888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:20:53.070708  259888 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:20:53.070744  259888 kubeadm.go:602] duration metric: took 19.393056ms to restartPrimaryControlPlane
	I1210 06:20:53.070754  259888 kubeadm.go:403] duration metric: took 71.740024ms to StartCluster
	I1210 06:20:53.070771  259888 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:20:53.070832  259888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:20:53.072032  259888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:20:53.072312  259888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:20:53.072403  259888 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:20:53.072561  259888 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:53.074913  259888 out.go:179] * Verifying Kubernetes components...
	I1210 06:20:53.074915  259888 out.go:179] * Enabled addons: 
	I1210 06:20:53.076715  259888 addons.go:530] duration metric: took 4.317666ms for enable addons: enabled=[]
	I1210 06:20:53.076751  259888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:20:53.188259  259888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:20:53.204215  259888 node_ready.go:35] waiting up to 6m0s for node "pause-203121" to be "Ready" ...
	I1210 06:20:53.212415  259888 node_ready.go:49] node "pause-203121" is "Ready"
	I1210 06:20:53.212447  259888 node_ready.go:38] duration metric: took 8.194815ms for node "pause-203121" to be "Ready" ...
	I1210 06:20:53.212462  259888 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:20:53.212546  259888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:20:53.225078  259888 api_server.go:72] duration metric: took 152.723936ms to wait for apiserver process to appear ...
	I1210 06:20:53.225111  259888 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:20:53.225134  259888 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 06:20:53.230448  259888 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 06:20:53.231499  259888 api_server.go:141] control plane version: v1.34.2
	I1210 06:20:53.231527  259888 api_server.go:131] duration metric: took 6.409801ms to wait for apiserver health ...
	I1210 06:20:53.231536  259888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:20:53.235202  259888 system_pods.go:59] 7 kube-system pods found
	I1210 06:20:53.235255  259888 system_pods.go:61] "coredns-66bc5c9577-j8lrj" [3d75b8ab-fa07-448d-a04e-b1ebb0d07bff] Running
	I1210 06:20:53.235263  259888 system_pods.go:61] "etcd-pause-203121" [a3c22062-c777-48d2-b5d1-8d79812b722d] Running
	I1210 06:20:53.235267  259888 system_pods.go:61] "kindnet-qn46q" [f2260206-9397-4c0b-9d7d-5c59c7fde610] Running
	I1210 06:20:53.235272  259888 system_pods.go:61] "kube-apiserver-pause-203121" [d2215bf1-06dc-42bf-a4d7-ba5c7a3de06f] Running
	I1210 06:20:53.235279  259888 system_pods.go:61] "kube-controller-manager-pause-203121" [49680915-8036-4a3d-a23e-96ecf9cf91c1] Running
	I1210 06:20:53.235285  259888 system_pods.go:61] "kube-proxy-jqpjb" [3a5b610d-98d6-498d-84f7-e3edeaad1acf] Running
	I1210 06:20:53.235291  259888 system_pods.go:61] "kube-scheduler-pause-203121" [aa977e11-890e-4448-8841-294d7fcc64f1] Running
	I1210 06:20:53.235299  259888 system_pods.go:74] duration metric: took 3.75695ms to wait for pod list to return data ...
	I1210 06:20:53.235326  259888 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:20:53.237582  259888 default_sa.go:45] found service account: "default"
	I1210 06:20:53.237609  259888 default_sa.go:55] duration metric: took 2.276678ms for default service account to be created ...
	I1210 06:20:53.237620  259888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:20:53.240531  259888 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:53.240560  259888 system_pods.go:89] "coredns-66bc5c9577-j8lrj" [3d75b8ab-fa07-448d-a04e-b1ebb0d07bff] Running
	I1210 06:20:53.240568  259888 system_pods.go:89] "etcd-pause-203121" [a3c22062-c777-48d2-b5d1-8d79812b722d] Running
	I1210 06:20:53.240575  259888 system_pods.go:89] "kindnet-qn46q" [f2260206-9397-4c0b-9d7d-5c59c7fde610] Running
	I1210 06:20:53.240581  259888 system_pods.go:89] "kube-apiserver-pause-203121" [d2215bf1-06dc-42bf-a4d7-ba5c7a3de06f] Running
	I1210 06:20:53.240588  259888 system_pods.go:89] "kube-controller-manager-pause-203121" [49680915-8036-4a3d-a23e-96ecf9cf91c1] Running
	I1210 06:20:53.240594  259888 system_pods.go:89] "kube-proxy-jqpjb" [3a5b610d-98d6-498d-84f7-e3edeaad1acf] Running
	I1210 06:20:53.240603  259888 system_pods.go:89] "kube-scheduler-pause-203121" [aa977e11-890e-4448-8841-294d7fcc64f1] Running
	I1210 06:20:53.240611  259888 system_pods.go:126] duration metric: took 2.984605ms to wait for k8s-apps to be running ...
	I1210 06:20:53.240620  259888 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:20:53.240672  259888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:53.254682  259888 system_svc.go:56] duration metric: took 14.054968ms WaitForService to wait for kubelet
	I1210 06:20:53.254706  259888 kubeadm.go:587] duration metric: took 182.359492ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:20:53.254723  259888 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:20:53.257599  259888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:20:53.257639  259888 node_conditions.go:123] node cpu capacity is 8
	I1210 06:20:53.257655  259888 node_conditions.go:105] duration metric: took 2.927427ms to run NodePressure ...
	I1210 06:20:53.257666  259888 start.go:242] waiting for startup goroutines ...
	I1210 06:20:53.257673  259888 start.go:247] waiting for cluster config update ...
	I1210 06:20:53.257680  259888 start.go:256] writing updated cluster config ...
	I1210 06:20:53.257944  259888 ssh_runner.go:195] Run: rm -f paused
	I1210 06:20:53.262455  259888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:53.263482  259888 kapi.go:59] client config for pause-203121: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:20:53.266453  259888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j8lrj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.271145  259888 pod_ready.go:94] pod "coredns-66bc5c9577-j8lrj" is "Ready"
	I1210 06:20:53.271169  259888 pod_ready.go:86] duration metric: took 4.681171ms for pod "coredns-66bc5c9577-j8lrj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.273432  259888 pod_ready.go:83] waiting for pod "etcd-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.277316  259888 pod_ready.go:94] pod "etcd-pause-203121" is "Ready"
	I1210 06:20:53.277343  259888 pod_ready.go:86] duration metric: took 3.885502ms for pod "etcd-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.279481  259888 pod_ready.go:83] waiting for pod "kube-apiserver-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.283412  259888 pod_ready.go:94] pod "kube-apiserver-pause-203121" is "Ready"
	I1210 06:20:53.283431  259888 pod_ready.go:86] duration metric: took 3.930718ms for pod "kube-apiserver-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.285327  259888 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.667221  259888 pod_ready.go:94] pod "kube-controller-manager-pause-203121" is "Ready"
	I1210 06:20:53.667250  259888 pod_ready.go:86] duration metric: took 381.902327ms for pod "kube-controller-manager-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.867481  259888 pod_ready.go:83] waiting for pod "kube-proxy-jqpjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.266939  259888 pod_ready.go:94] pod "kube-proxy-jqpjb" is "Ready"
	I1210 06:20:54.266972  259888 pod_ready.go:86] duration metric: took 399.461326ms for pod "kube-proxy-jqpjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.467079  259888 pod_ready.go:83] waiting for pod "kube-scheduler-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.867410  259888 pod_ready.go:94] pod "kube-scheduler-pause-203121" is "Ready"
	I1210 06:20:54.867439  259888 pod_ready.go:86] duration metric: took 400.30366ms for pod "kube-scheduler-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.867453  259888 pod_ready.go:40] duration metric: took 1.604954095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:54.915267  259888 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:20:54.917540  259888 out.go:179] * Done! kubectl is now configured to use "pause-203121" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.750917463Z" level=info msg="RDT not available in the host system"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.750932771Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.752135867Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.752163832Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.752185564Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.753261482Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.753282166Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.762368412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.762405853Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.763174331Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.763721293Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.76379272Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.858691835Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-j8lrj Namespace:kube-system ID:ad79311d23062295b40365d723906fd145ff91e84249da8cdc377ac2af9dc420 UID:3d75b8ab-fa07-448d-a04e-b1ebb0d07bff NetNS:/var/run/netns/d09a97d5-d346-4fc7-ac09-eb22aa04e1a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a9a0}] Aliases:map[]}"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.858908203Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-j8lrj for CNI network kindnet (type=ptp)"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859420825Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859451548Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.85957935Z" level=info msg="Create NRI interface"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859680386Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859696474Z" level=info msg="runtime interface created"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859708308Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859713356Z" level=info msg="runtime interface starting up..."
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859718506Z" level=info msg="starting plugins..."
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859729424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.860088803Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:20:51 pause-203121 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cf62f9e28d443       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago       Running             coredns                   0                   ad79311d23062       coredns-66bc5c9577-j8lrj               kube-system
	e00866f864193       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   53 seconds ago       Running             kube-proxy                0                   f37cdf94b850e       kube-proxy-jqpjb                       kube-system
	4324a96acbf26       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   53 seconds ago       Running             kindnet-cni               0                   f85bf0ad78155       kindnet-qn46q                          kube-system
	b0ef753ac71a3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   62317fbf6774f       kube-apiserver-pause-203121            kube-system
	6f2d0d213957b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   37752d2cfa69a       etcd-pause-203121                      kube-system
	f1f8b92df9fd1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   25b3b651ebe33       kube-scheduler-pause-203121            kube-system
	9fed510c4454c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   757ea1d5f8a73       kube-controller-manager-pause-203121   kube-system
	
	
	==> coredns [cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33600 - 25107 "HINFO IN 1814052844277688122.7575454376246788054. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.437869762s
	
	
	==> describe nodes <==
	Name:               pause-203121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-203121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=pause-203121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:19:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-203121
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:20:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:19:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:19:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:19:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:20:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-203121
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                4c472031-a92e-4218-91a0-a496dc16bf08
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-j8lrj                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-pause-203121                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-qn46q                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-pause-203121             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-203121    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-jqpjb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-203121             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node pause-203121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node pause-203121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node pause-203121 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node pause-203121 event: Registered Node pause-203121 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-203121 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.744944] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 05:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.032224] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023939] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +2.047757] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +4.031567] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +8.191127] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[ +16.382234] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[Dec10 05:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[Dec10 06:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	
	
	==> etcd [6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add] <==
	{"level":"warn","ts":"2025-12-10T06:19:55.559529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.570879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.583403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.598949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.607356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.625203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.630222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.639382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.650464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.724396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:20:01.577929Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.565459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-12-10T06:20:01.578029Z","caller":"traceutil/trace.go:172","msg":"trace[2068022794] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:285; }","duration":"178.674806ms","start":"2025-12-10T06:20:01.399337Z","end":"2025-12-10T06:20:01.578012Z","steps":["trace[2068022794] 'range keys from in-memory index tree'  (duration: 178.41605ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:02.125934Z","caller":"traceutil/trace.go:172","msg":"trace[1392134239] transaction","detail":"{read_only:false; response_revision:289; number_of_response:1; }","duration":"126.489303ms","start":"2025-12-10T06:20:01.999419Z","end":"2025-12-10T06:20:02.125908Z","steps":["trace[1392134239] 'process raft request'  (duration: 126.388642ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:20:05.215426Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.903292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790581099013742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-jqpjb.187fc64b24e4eaa8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-jqpjb.187fc64b24e4eaa8\" value_size:633 lease:4650418544244237643 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:20:05.215610Z","caller":"traceutil/trace.go:172","msg":"trace[422589769] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"152.738509ms","start":"2025-12-10T06:20:05.062862Z","end":"2025-12-10T06:20:05.215600Z","steps":["trace[422589769] 'process raft request'  (duration: 152.670252ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:05.215647Z","caller":"traceutil/trace.go:172","msg":"trace[772312394] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"171.398733ms","start":"2025-12-10T06:20:05.044219Z","end":"2025-12-10T06:20:05.215617Z","steps":["trace[772312394] 'process raft request'  (duration: 57.023451ms)","trace[772312394] 'compare'  (duration: 113.801328ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:20:05.408852Z","caller":"traceutil/trace.go:172","msg":"trace[1656008090] linearizableReadLoop","detail":"{readStateIndex:363; appliedIndex:363; }","duration":"180.134723ms","start":"2025-12-10T06:20:05.228693Z","end":"2025-12-10T06:20:05.408828Z","steps":["trace[1656008090] 'read index received'  (duration: 180.105942ms)","trace[1656008090] 'applied index is now lower than readState.Index'  (duration: 27.946µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:20:05.430934Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.222218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4338"}
	{"level":"info","ts":"2025-12-10T06:20:05.431081Z","caller":"traceutil/trace.go:172","msg":"trace[1692918075] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:354; }","duration":"202.375746ms","start":"2025-12-10T06:20:05.228690Z","end":"2025-12-10T06:20:05.431066Z","steps":["trace[1692918075] 'agreement among raft nodes before linearized reading'  (duration: 180.227597ms)","trace[1692918075] 'range keys from in-memory index tree'  (duration: 21.907105ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:20:05.431112Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.291779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-jqpjb\" limit:1 ","response":"range_response_count:1 size:5039"}
	{"level":"warn","ts":"2025-12-10T06:20:05.431117Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.40297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-203121\" limit:1 ","response":"range_response_count:1 size:5560"}
	{"level":"info","ts":"2025-12-10T06:20:05.431148Z","caller":"traceutil/trace.go:172","msg":"trace[53438135] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-jqpjb; range_end:; response_count:1; response_revision:355; }","duration":"157.335995ms","start":"2025-12-10T06:20:05.273803Z","end":"2025-12-10T06:20:05.431139Z","steps":["trace[53438135] 'agreement among raft nodes before linearized reading'  (duration: 157.204511ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:05.431160Z","caller":"traceutil/trace.go:172","msg":"trace[1883692611] range","detail":"{range_begin:/registry/minions/pause-203121; range_end:; response_count:1; response_revision:355; }","duration":"202.445155ms","start":"2025-12-10T06:20:05.228698Z","end":"2025-12-10T06:20:05.431143Z","steps":["trace[1883692611] 'agreement among raft nodes before linearized reading'  (duration: 202.313271ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:05.431015Z","caller":"traceutil/trace.go:172","msg":"trace[404264850] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"212.399165ms","start":"2025-12-10T06:20:05.218590Z","end":"2025-12-10T06:20:05.430990Z","steps":["trace[404264850] 'process raft request'  (duration: 190.270673ms)","trace[404264850] 'compare'  (duration: 21.985742ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:20:49.959662Z","caller":"traceutil/trace.go:172","msg":"trace[374987701] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"104.244124ms","start":"2025-12-10T06:20:49.855399Z","end":"2025-12-10T06:20:49.959643Z","steps":["trace[374987701] 'process raft request'  (duration: 104.114555ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:20:58 up  1:03,  0 user,  load average: 4.87, 3.31, 2.01
	Linux pause-203121 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c] <==
	I1210 06:20:05.144605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:20:05.144900       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:20:05.145057       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:20:05.145070       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:20:05.145110       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:20:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:20:05.346449       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:20:05.346927       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:20:05.346940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:20:05.347207       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 06:20:35.347256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 06:20:35.347558       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 06:20:35.347562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 06:20:35.347719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1210 06:20:36.947576       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:20:36.947612       1 metrics.go:72] Registering metrics
	I1210 06:20:36.947721       1 controller.go:711] "Syncing nftables rules"
	I1210 06:20:45.353394       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:20:45.353451       1 main.go:301] handling current node
	I1210 06:20:55.351599       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:20:55.351663       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a] <==
	E1210 06:19:56.412697       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1210 06:19:56.464350       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:19:56.475353       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:19:56.481917       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1210 06:19:56.485293       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1210 06:19:56.503913       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:19:56.504006       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:19:56.599299       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:19:57.264431       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:19:57.269592       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:19:57.269615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:19:58.015357       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:19:58.068874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:19:58.176405       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:19:58.184785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1210 06:19:58.186235       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:19:58.193059       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:19:58.401686       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:19:59.370104       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:19:59.383527       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:19:59.395242       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:20:04.058147       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:20:04.255443       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 06:20:04.307815       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:20:04.314664       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05] <==
	I1210 06:20:03.459729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:20:03.462933       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:20:03.471260       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:20:03.477601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 06:20:03.480014       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 06:20:03.480044       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:20:03.480174       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 06:20:03.480268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-203121"
	I1210 06:20:03.480375       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 06:20:03.481483       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:20:03.493304       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:20:03.494462       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:20:03.501940       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:20:03.506426       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:20:03.506449       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:20:03.508078       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:20:03.510341       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:20:03.510356       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:20:03.510362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:20:03.512535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:20:03.512625       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:20:03.517948       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:20:03.524400       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:20:03.559776       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-203121" podCIDRs=["10.244.0.0/24"]
	I1210 06:20:48.487961       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab] <==
	I1210 06:20:05.046858       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:20:05.111328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:20:05.212186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:20:05.212235       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:20:05.212307       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:20:05.301257       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:20:05.301315       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:20:05.307971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:20:05.308343       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:20:05.308365       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:20:05.309776       1 config.go:200] "Starting service config controller"
	I1210 06:20:05.309802       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:20:05.309814       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:20:05.309833       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:20:05.309835       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:20:05.309849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:20:05.309865       1 config.go:309] "Starting node config controller"
	I1210 06:20:05.309871       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:20:05.309879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:20:05.410650       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:20:05.411085       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:20:05.411177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec] <==
	E1210 06:19:56.506603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:19:56.506778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:19:56.506974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:19:56.507104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:19:56.507175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:19:56.507239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:19:56.507293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:19:56.507349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:19:56.507498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:19:56.507653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:19:56.507838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:19:56.507928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:19:56.512317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:19:57.327441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:19:57.352939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:19:57.499636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:19:57.551172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:19:57.580220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:19:57.608153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:19:57.625564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:19:57.633363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:19:57.726117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:19:57.730464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:19:57.937350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 06:20:00.290030       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337347    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pnfb\" (UniqueName: \"kubernetes.io/projected/f2260206-9397-4c0b-9d7d-5c59c7fde610-kube-api-access-6pnfb\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337404    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a5b610d-98d6-498d-84f7-e3edeaad1acf-xtables-lock\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337432    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a5b610d-98d6-498d-84f7-e3edeaad1acf-lib-modules\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337494    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2260206-9397-4c0b-9d7d-5c59c7fde610-lib-modules\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337519    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszqv\" (UniqueName: \"kubernetes.io/projected/3a5b610d-98d6-498d-84f7-e3edeaad1acf-kube-api-access-hszqv\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337543    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f2260206-9397-4c0b-9d7d-5c59c7fde610-cni-cfg\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337612    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2260206-9397-4c0b-9d7d-5c59c7fde610-xtables-lock\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337648    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a5b610d-98d6-498d-84f7-e3edeaad1acf-kube-proxy\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:05 pause-203121 kubelet[1328]: I1210 06:20:05.520611    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqpjb" podStartSLOduration=1.520586828 podStartE2EDuration="1.520586828s" podCreationTimestamp="2025-12-10 06:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:20:05.519951265 +0000 UTC m=+6.399575737" watchObservedRunningTime="2025-12-10 06:20:05.520586828 +0000 UTC m=+6.400211296"
	Dec 10 06:20:07 pause-203121 kubelet[1328]: I1210 06:20:07.171426    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qn46q" podStartSLOduration=3.171399883 podStartE2EDuration="3.171399883s" podCreationTimestamp="2025-12-10 06:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:20:05.562832674 +0000 UTC m=+6.442457145" watchObservedRunningTime="2025-12-10 06:20:07.171399883 +0000 UTC m=+8.051024351"
	Dec 10 06:20:45 pause-203121 kubelet[1328]: I1210 06:20:45.829970    1328 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:20:45 pause-203121 kubelet[1328]: I1210 06:20:45.937522    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d75b8ab-fa07-448d-a04e-b1ebb0d07bff-config-volume\") pod \"coredns-66bc5c9577-j8lrj\" (UID: \"3d75b8ab-fa07-448d-a04e-b1ebb0d07bff\") " pod="kube-system/coredns-66bc5c9577-j8lrj"
	Dec 10 06:20:45 pause-203121 kubelet[1328]: I1210 06:20:45.937597    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7h4s\" (UniqueName: \"kubernetes.io/projected/3d75b8ab-fa07-448d-a04e-b1ebb0d07bff-kube-api-access-l7h4s\") pod \"coredns-66bc5c9577-j8lrj\" (UID: \"3d75b8ab-fa07-448d-a04e-b1ebb0d07bff\") " pod="kube-system/coredns-66bc5c9577-j8lrj"
	Dec 10 06:20:46 pause-203121 kubelet[1328]: I1210 06:20:46.410815    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j8lrj" podStartSLOduration=42.410787257 podStartE2EDuration="42.410787257s" podCreationTimestamp="2025-12-10 06:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:20:46.392300366 +0000 UTC m=+47.271924847" watchObservedRunningTime="2025-12-10 06:20:46.410787257 +0000 UTC m=+47.290411727"
	Dec 10 06:20:49 pause-203121 kubelet[1328]: W1210 06:20:49.850195    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:20:49 pause-203121 kubelet[1328]: E1210 06:20:49.850360    1328 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:20:49 pause-203121 kubelet[1328]: W1210 06:20:49.951423    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:20:50 pause-203121 kubelet[1328]: W1210 06:20:50.122006    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:20:50 pause-203121 kubelet[1328]: E1210 06:20:50.384362    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 06:20:50 pause-203121 kubelet[1328]: E1210 06:20:50.384449    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:20:50 pause-203121 kubelet[1328]: E1210 06:20:50.384509    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:20:55 pause-203121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:20:55 pause-203121 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:20:55 pause-203121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:55 pause-203121 systemd[1]: kubelet.service: Consumed 2.425s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-203121 -n pause-203121
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-203121 -n pause-203121: exit status 2 (362.621313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-203121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-203121
helpers_test.go:244: (dbg) docker inspect pause-203121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e",
	        "Created": "2025-12-10T06:19:43.805927705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:19:43.871651642Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/hosts",
	        "LogPath": "/var/lib/docker/containers/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e/c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e-json.log",
	        "Name": "/pause-203121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-203121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-203121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c6c142fbb7f9e1e3e5b97c5ae6d40166f2e8184202f6506a13db0fb004e54f0e",
	                "LowerDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d66366d2ed37cc38ebba005085ae77de341be11633e7a0a8a693e0307af19d59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-203121",
	                "Source": "/var/lib/docker/volumes/pause-203121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-203121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-203121",
	                "name.minikube.sigs.k8s.io": "pause-203121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c187516cefe69bb383bcdfa9002f0e2ac0f29d3b12db4ee9290faf735425067a",
	            "SandboxKey": "/var/run/docker/netns/c187516cefe6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-203121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e492d46712ecdcb5aaa4dd1d5f297eefd549161039fd49ae01777e7980eb8128",
	                    "EndpointID": "c4057375f99a0d0e05bba9ae49c34b269f87c68d4a0ff6f812a197fc145e6a2b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "36:7a:56:74:89:21",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-203121",
	                        "c6c142fbb7f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-203121 -n pause-203121
I1210 06:20:59.049929   12374 config.go:182] Loaded profile config "custom-flannel-201263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-203121 -n pause-203121: exit status 2 (378.180929ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-203121 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-203121 logs -n 25: (1.260258803s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-800617                                                                                                                                       │ kubernetes-upgrade-800617 │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p stopped-upgrade-709856 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                               │ stopped-upgrade-709856    │ jenkins │ v1.35.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p running-upgrade-538113 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                           │ running-upgrade-538113    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p missing-upgrade-490462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                           │ missing-upgrade-490462    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ stop    │ stopped-upgrade-709856 stop                                                                                                                                        │ stopped-upgrade-709856    │ jenkins │ v1.35.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p stopped-upgrade-709856 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                           │ stopped-upgrade-709856    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ delete  │ -p running-upgrade-538113                                                                                                                                          │ running-upgrade-538113    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p pause-203121 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                          │ pause-203121              │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:20 UTC │
	│ delete  │ -p stopped-upgrade-709856                                                                                                                                          │ stopped-upgrade-709856    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p cert-expiration-936135 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                          │ cert-expiration-936135    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:20 UTC │
	│ delete  │ -p missing-upgrade-490462                                                                                                                                          │ missing-upgrade-490462    │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ start   │ -p auto-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                            │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:20 UTC │
	│ start   │ -p calico-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-201263             │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ delete  │ -p cert-expiration-936135                                                                                                                                          │ cert-expiration-936135    │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ start   │ -p custom-flannel-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-201263     │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh     │ -p auto-201263 pgrep -a kubelet                                                                                                                                    │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ start   │ -p pause-203121 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                   │ pause-203121              │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ pause   │ -p pause-203121 --alsologtostderr -v=5                                                                                                                             │ pause-203121              │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ ssh     │ -p calico-201263 pgrep -a kubelet                                                                                                                                  │ calico-201263             │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh     │ -p auto-201263 sudo cat /etc/nsswitch.conf                                                                                                                         │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh     │ -p auto-201263 sudo cat /etc/hosts                                                                                                                                 │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh     │ -p auto-201263 sudo cat /etc/resolv.conf                                                                                                                           │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh     │ -p auto-201263 sudo crictl pods                                                                                                                                    │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ ssh     │ -p custom-flannel-201263 pgrep -a kubelet                                                                                                                          │ custom-flannel-201263     │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh     │ -p auto-201263 sudo crictl ps --all                                                                                                                                │ auto-201263               │ jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:20:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:20:48.519285  259888 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:20:48.519417  259888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:48.519429  259888 out.go:374] Setting ErrFile to fd 2...
	I1210 06:20:48.519435  259888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:48.519662  259888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:20:48.520188  259888 out.go:368] Setting JSON to false
	I1210 06:20:48.521661  259888 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3800,"bootTime":1765343849,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:20:48.521724  259888 start.go:143] virtualization: kvm guest
	I1210 06:20:48.524198  259888 out.go:179] * [pause-203121] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:20:48.525797  259888 notify.go:221] Checking for updates...
	I1210 06:20:48.525826  259888 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:20:48.527873  259888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:48.529454  259888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:20:48.530991  259888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:20:48.532426  259888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:20:48.533892  259888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:48.536047  259888 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:48.536752  259888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:48.563635  259888 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:20:48.563720  259888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:48.626860  259888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:20:48.613771919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:20:48.627016  259888 docker.go:319] overlay module found
	I1210 06:20:48.628922  259888 out.go:179] * Using the docker driver based on existing profile
	I1210 06:20:48.630221  259888 start.go:309] selected driver: docker
	I1210 06:20:48.630236  259888 start.go:927] validating driver "docker" against &{Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:48.630357  259888 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:48.630438  259888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:48.685332  259888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-10 06:20:48.675301563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:20:48.686057  259888 cni.go:84] Creating CNI manager for ""
	I1210 06:20:48.686137  259888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:20:48.686192  259888 start.go:353] cluster config:
	{Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:fals
e storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:48.688406  259888 out.go:179] * Starting "pause-203121" primary control-plane node in "pause-203121" cluster
	I1210 06:20:48.689819  259888 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:20:48.691092  259888 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:20:48.692410  259888 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:20:48.692457  259888 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:20:48.692488  259888 cache.go:65] Caching tarball of preloaded images
	I1210 06:20:48.692548  259888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:20:48.692596  259888 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:20:48.692612  259888 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:20:48.692767  259888 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/config.json ...
	I1210 06:20:48.714642  259888 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:20:48.714667  259888 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:20:48.714687  259888 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:20:48.714722  259888 start.go:360] acquireMachinesLock for pause-203121: {Name:mk1bee09c0ae4144013617ea77d79f9f746f3d96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:20:48.714792  259888 start.go:364] duration metric: took 44.733µs to acquireMachinesLock for "pause-203121"
	I1210 06:20:48.714816  259888 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:20:48.714825  259888 fix.go:54] fixHost starting: 
	I1210 06:20:48.715030  259888 cli_runner.go:164] Run: docker container inspect pause-203121 --format={{.State.Status}}
	I1210 06:20:48.733826  259888 fix.go:112] recreateIfNeeded on pause-203121: state=Running err=<nil>
	W1210 06:20:48.733858  259888 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:20:47.541232  249014 system_pods.go:86] 9 kube-system pods found
	I1210 06:20:47.541263  249014 system_pods.go:89] "calico-kube-controllers-5c676f698c-j8l7d" [24904c80-7f3b-4329-ac32-7b6f5b5ab3c2] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1210 06:20:47.541271  249014 system_pods.go:89] "calico-node-sndmq" [0a9fda50-386f-4548-b300-0f2b61dfb24a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1210 06:20:47.541279  249014 system_pods.go:89] "coredns-66bc5c9577-9s7ld" [5d303937-d9b7-4946-b8ef-45e65b5f04f1] Running
	I1210 06:20:47.541283  249014 system_pods.go:89] "etcd-calico-201263" [a33c1442-9339-4b03-93ab-034382a4ab11] Running
	I1210 06:20:47.541287  249014 system_pods.go:89] "kube-apiserver-calico-201263" [d586d73a-8d14-4a43-b182-9abf1e7d659f] Running
	I1210 06:20:47.541291  249014 system_pods.go:89] "kube-controller-manager-calico-201263" [962a3c12-8790-4f9e-9ea3-aba14c4c5972] Running
	I1210 06:20:47.541295  249014 system_pods.go:89] "kube-proxy-7bwmh" [d08030b5-2aed-4336-b38b-e4213293ac7e] Running
	I1210 06:20:47.541300  249014 system_pods.go:89] "kube-scheduler-calico-201263" [fdcfeefb-e366-46c5-aed3-c883a3ced741] Running
	I1210 06:20:47.541305  249014 system_pods.go:89] "storage-provisioner" [817f9fad-fc46-4b94-96b7-dda29c164525] Running
	I1210 06:20:47.541318  249014 system_pods.go:126] duration metric: took 14.158550887s to wait for k8s-apps to be running ...
	I1210 06:20:47.541330  249014 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:20:47.541383  249014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:47.556623  249014 system_svc.go:56] duration metric: took 15.285294ms WaitForService to wait for kubelet
	I1210 06:20:47.556653  249014 kubeadm.go:587] duration metric: took 19.086544011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:20:47.556673  249014 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:20:47.560018  249014 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:20:47.560069  249014 node_conditions.go:123] node cpu capacity is 8
	I1210 06:20:47.560084  249014 node_conditions.go:105] duration metric: took 3.405879ms to run NodePressure ...
	I1210 06:20:47.560099  249014 start.go:242] waiting for startup goroutines ...
	I1210 06:20:47.560112  249014 start.go:247] waiting for cluster config update ...
	I1210 06:20:47.560126  249014 start.go:256] writing updated cluster config ...
	I1210 06:20:47.560493  249014 ssh_runner.go:195] Run: rm -f paused
	I1210 06:20:47.564976  249014 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:47.568729  249014 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9s7ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.573242  249014 pod_ready.go:94] pod "coredns-66bc5c9577-9s7ld" is "Ready"
	I1210 06:20:47.573264  249014 pod_ready.go:86] duration metric: took 4.509237ms for pod "coredns-66bc5c9577-9s7ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.575262  249014 pod_ready.go:83] waiting for pod "etcd-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.579508  249014 pod_ready.go:94] pod "etcd-calico-201263" is "Ready"
	I1210 06:20:47.579531  249014 pod_ready.go:86] duration metric: took 4.249744ms for pod "etcd-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.581696  249014 pod_ready.go:83] waiting for pod "kube-apiserver-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.585829  249014 pod_ready.go:94] pod "kube-apiserver-calico-201263" is "Ready"
	I1210 06:20:47.585854  249014 pod_ready.go:86] duration metric: took 4.136871ms for pod "kube-apiserver-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.587776  249014 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:47.969151  249014 pod_ready.go:94] pod "kube-controller-manager-calico-201263" is "Ready"
	I1210 06:20:47.969175  249014 pod_ready.go:86] duration metric: took 381.378433ms for pod "kube-controller-manager-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:48.169295  249014 pod_ready.go:83] waiting for pod "kube-proxy-7bwmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:48.569807  249014 pod_ready.go:94] pod "kube-proxy-7bwmh" is "Ready"
	I1210 06:20:48.569842  249014 pod_ready.go:86] duration metric: took 400.515607ms for pod "kube-proxy-7bwmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:48.769914  249014 pod_ready.go:83] waiting for pod "kube-scheduler-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:49.169605  249014 pod_ready.go:94] pod "kube-scheduler-calico-201263" is "Ready"
	I1210 06:20:49.169630  249014 pod_ready.go:86] duration metric: took 399.519093ms for pod "kube-scheduler-calico-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:49.169642  249014 pod_ready.go:40] duration metric: took 1.604635835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:49.215058  249014 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:20:49.217437  249014 out.go:179] * Done! kubectl is now configured to use "calico-201263" cluster and "default" namespace by default
	I1210 06:20:48.736763  259888 out.go:252] * Updating the running docker "pause-203121" container ...
	I1210 06:20:48.736808  259888 machine.go:94] provisionDockerMachine start ...
	I1210 06:20:48.736909  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:48.760774  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:48.761192  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:48.761218  259888 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:20:48.896775  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-203121
	
	I1210 06:20:48.896805  259888 ubuntu.go:182] provisioning hostname "pause-203121"
	I1210 06:20:48.896869  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:48.917591  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:48.917846  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:48.917864  259888 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-203121 && echo "pause-203121" | sudo tee /etc/hostname
	I1210 06:20:49.061986  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-203121
	
	I1210 06:20:49.062044  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:49.080968  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:49.081226  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:49.081252  259888 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-203121' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-203121/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-203121' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:20:49.213699  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:20:49.213727  259888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:20:49.213751  259888 ubuntu.go:190] setting up certificates
	I1210 06:20:49.213765  259888 provision.go:84] configureAuth start
	I1210 06:20:49.213847  259888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-203121
	I1210 06:20:49.233563  259888 provision.go:143] copyHostCerts
	I1210 06:20:49.233639  259888 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:20:49.233657  259888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:20:49.233752  259888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:20:49.234279  259888 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:20:49.234295  259888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:20:49.234351  259888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:20:49.234503  259888 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:20:49.234511  259888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:20:49.234551  259888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:20:49.234655  259888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.pause-203121 san=[127.0.0.1 192.168.103.2 localhost minikube pause-203121]
	I1210 06:20:49.399505  259888 provision.go:177] copyRemoteCerts
	I1210 06:20:49.399570  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:20:49.399602  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:49.419207  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:49.520485  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:20:49.540996  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:20:49.560718  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 06:20:49.580257  259888 provision.go:87] duration metric: took 366.466467ms to configureAuth
	I1210 06:20:49.580290  259888 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:20:49.580524  259888 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:49.580630  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:49.600382  259888 main.go:143] libmachine: Using SSH client type: native
	I1210 06:20:49.600638  259888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1210 06:20:49.600656  259888 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:20:50.314133  259888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:20:50.314156  259888 machine.go:97] duration metric: took 1.577339765s to provisionDockerMachine
	I1210 06:20:50.314166  259888 start.go:293] postStartSetup for "pause-203121" (driver="docker")
	I1210 06:20:50.314176  259888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:20:50.314241  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:20:50.314285  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.334582  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.431556  259888 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:20:50.435575  259888 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:20:50.435602  259888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:20:50.435613  259888 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:20:50.435670  259888 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:20:50.435756  259888 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:20:50.435893  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:20:50.444542  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:20:50.464558  259888 start.go:296] duration metric: took 150.378766ms for postStartSetup
	I1210 06:20:50.464649  259888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:20:50.464695  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.485148  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.581599  259888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:20:50.587391  259888 fix.go:56] duration metric: took 1.872551197s for fixHost
	I1210 06:20:50.587420  259888 start.go:83] releasing machines lock for "pause-203121", held for 1.872615096s
	I1210 06:20:50.587502  259888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-203121
	I1210 06:20:50.606810  259888 ssh_runner.go:195] Run: cat /version.json
	I1210 06:20:50.606858  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.606923  259888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:20:50.606993  259888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-203121
	I1210 06:20:50.626907  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.627554  259888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/pause-203121/id_rsa Username:docker}
	I1210 06:20:50.719108  259888 ssh_runner.go:195] Run: systemctl --version
	I1210 06:20:50.786355  259888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:20:50.825747  259888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:20:50.830941  259888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:20:50.830997  259888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:20:50.839910  259888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:20:50.839938  259888 start.go:496] detecting cgroup driver to use...
	I1210 06:20:50.839981  259888 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:20:50.840046  259888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:20:50.855954  259888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:20:50.869260  259888 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:20:50.869333  259888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:20:50.885349  259888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:20:50.900195  259888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:20:51.022729  259888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:20:51.135014  259888 docker.go:234] disabling docker service ...
	I1210 06:20:51.135075  259888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:20:51.150508  259888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:20:51.163515  259888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:20:51.277521  259888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:20:51.393911  259888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
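Note: because CRI-O is the selected runtime, the lines above stop and mask both cri-dockerd and the Docker service before CRI-O itself is reconfigured. A hedged, illustrative check of the result (not part of the test run; assumes the node container name from this log):

	# Illustrative only: masked/disabled units should report "masked", docker should be inactive
	minikube -p pause-203121 ssh -- "systemctl is-enabled docker.socket cri-docker.socket; systemctl is-active docker"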
	I1210 06:20:51.407181  259888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:20:51.422181  259888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:20:51.422270  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.431988  259888 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:20:51.432038  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.441880  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.452239  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.462072  259888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:20:51.472098  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.482886  259888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.493228  259888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:20:51.504329  259888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:20:51.513022  259888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:20:51.522030  259888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:20:51.638728  259888 ssh_runner.go:195] Run: sudo systemctl restart crio
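Note: the tee/sed commands above rewrite /etc/crictl.yaml (CRI endpoint) and /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon_cgroup, unprivileged-port sysctl) before CRI-O is restarted. A minimal sketch for confirming the resulting settings on the node, illustrative only and using the paths seen in this log:

	# Illustrative only: inspect the files the commands above just rewrote
	minikube -p pause-203121 ssh -- sudo cat /etc/crictl.yaml
	minikube -p pause-203121 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"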
	I1210 06:20:51.864149  259888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:20:51.864259  259888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:20:51.868690  259888 start.go:564] Will wait 60s for crictl version
	I1210 06:20:51.868753  259888 ssh_runner.go:195] Run: which crictl
	I1210 06:20:51.873406  259888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:20:51.903298  259888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:20:51.903409  259888 ssh_runner.go:195] Run: crio --version
	I1210 06:20:51.934944  259888 ssh_runner.go:195] Run: crio --version
	I1210 06:20:51.969771  259888 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 06:20:47.967004  252480 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:47.967034  252480 system_pods.go:89] "coredns-66bc5c9577-r7p5t" [97853ca3-8982-4324-a9f2-005209f7a2dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:20:47.967040  252480 system_pods.go:89] "etcd-custom-flannel-201263" [a7895f5c-56b2-4262-b678-eab987e9faa4] Running
	I1210 06:20:47.967047  252480 system_pods.go:89] "kube-apiserver-custom-flannel-201263" [99edbcf6-7869-4984-b2b8-047a3bd5b219] Running
	I1210 06:20:47.967052  252480 system_pods.go:89] "kube-controller-manager-custom-flannel-201263" [efe68542-32c5-454b-aafb-f638325cac55] Running
	I1210 06:20:47.967056  252480 system_pods.go:89] "kube-proxy-lmwlf" [f64f811e-2c3a-4ace-bf39-4dfae0bf9e48] Running
	I1210 06:20:47.967062  252480 system_pods.go:89] "kube-scheduler-custom-flannel-201263" [f2835dd2-8d60-48e3-b5d0-bd3908ce76db] Running
	I1210 06:20:47.967067  252480 system_pods.go:89] "storage-provisioner" [f53483dd-3c55-479a-b395-e6caefa2136d] Running
	I1210 06:20:47.967083  252480 retry.go:31] will retry after 2.081778836s: missing components: kube-dns
	I1210 06:20:50.053527  252480 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:50.053565  252480 system_pods.go:89] "coredns-66bc5c9577-r7p5t" [97853ca3-8982-4324-a9f2-005209f7a2dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:20:50.053572  252480 system_pods.go:89] "etcd-custom-flannel-201263" [a7895f5c-56b2-4262-b678-eab987e9faa4] Running
	I1210 06:20:50.053582  252480 system_pods.go:89] "kube-apiserver-custom-flannel-201263" [99edbcf6-7869-4984-b2b8-047a3bd5b219] Running
	I1210 06:20:50.053588  252480 system_pods.go:89] "kube-controller-manager-custom-flannel-201263" [efe68542-32c5-454b-aafb-f638325cac55] Running
	I1210 06:20:50.053594  252480 system_pods.go:89] "kube-proxy-lmwlf" [f64f811e-2c3a-4ace-bf39-4dfae0bf9e48] Running
	I1210 06:20:50.053600  252480 system_pods.go:89] "kube-scheduler-custom-flannel-201263" [f2835dd2-8d60-48e3-b5d0-bd3908ce76db] Running
	I1210 06:20:50.053604  252480 system_pods.go:89] "storage-provisioner" [f53483dd-3c55-479a-b395-e6caefa2136d] Running
	I1210 06:20:50.053622  252480 retry.go:31] will retry after 3.048460615s: missing components: kube-dns
	I1210 06:20:51.971269  259888 cli_runner.go:164] Run: docker network inspect pause-203121 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:20:51.994209  259888 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1210 06:20:51.998892  259888 kubeadm.go:884] updating cluster {Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regis
try-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:20:51.999067  259888 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:20:51.999126  259888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:20:52.040183  259888 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:20:52.040206  259888 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:20:52.040342  259888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:20:52.071028  259888 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:20:52.071092  259888 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:20:52.071101  259888 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1210 06:20:52.071234  259888 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-203121 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
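Note: the [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that minikube generates; a few lines below it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to see what systemd actually loads after the daemon-reload (illustrative, not part of the test run):

	# Illustrative only: show the kubelet unit plus all active drop-ins
	minikube -p pause-203121 ssh -- sudo systemctl cat kubelet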
	I1210 06:20:52.071319  259888 ssh_runner.go:195] Run: crio config
	I1210 06:20:52.125645  259888 cni.go:84] Creating CNI manager for ""
	I1210 06:20:52.125672  259888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:20:52.125690  259888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:20:52.125728  259888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-203121 NodeName:pause-203121 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:20:52.125881  259888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-203121"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:20:52.125955  259888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:20:52.134938  259888 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:20:52.135008  259888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:20:52.143288  259888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 06:20:52.156909  259888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:20:52.170031  259888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
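Note: the kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new by the scp line just before this one. A hedged sanity check, assuming the kubeadm binary sits alongside kubelet in /var/lib/minikube/binaries/v1.34.2 and that this kubeadm release supports `config validate` (both assumptions, not shown in this log):

	# Illustrative only: ask kubeadm to validate the staged config against its API versions
	minikube -p pause-203121 ssh -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new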
	I1210 06:20:52.183189  259888 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:20:52.187335  259888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:20:52.303320  259888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:20:52.317369  259888 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121 for IP: 192.168.103.2
	I1210 06:20:52.317392  259888 certs.go:195] generating shared ca certs ...
	I1210 06:20:52.317411  259888 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:20:52.317579  259888 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:20:52.317621  259888 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:20:52.317631  259888 certs.go:257] generating profile certs ...
	I1210 06:20:52.317711  259888 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.key
	I1210 06:20:52.317768  259888 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/apiserver.key.d41d005a
	I1210 06:20:52.317806  259888 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/proxy-client.key
	I1210 06:20:52.317913  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:20:52.317954  259888 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:20:52.317963  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:20:52.317991  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:20:52.318017  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:20:52.318040  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:20:52.318079  259888 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:20:52.318851  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:20:52.338548  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:20:52.357560  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:20:52.378666  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:20:52.398841  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 06:20:52.417441  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:20:52.436599  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:20:52.457021  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:20:52.479290  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:20:52.502987  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:20:52.524016  259888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:20:52.546933  259888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:20:52.562607  259888 ssh_runner.go:195] Run: openssl version
	I1210 06:20:52.570738  259888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.579887  259888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:20:52.588288  259888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.592503  259888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.592561  259888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:20:52.629367  259888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:20:52.637576  259888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.646369  259888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:20:52.655793  259888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.659987  259888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.660052  259888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:20:52.699093  259888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:20:52.707497  259888 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.715697  259888 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:20:52.723733  259888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.727775  259888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.727838  259888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:20:52.764439  259888 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
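Note: each `test -L /etc/ssl/certs/<hash>.0` check above uses the OpenSSL subject-hash of a CA file as the link name, which is why `openssl x509 -hash -noout` runs immediately before it (minikubeCA.pem hashes to b5213941 in this run). A minimal illustration, to be run inside the node (e.g. via minikube ssh):

	# Illustrative only: OpenSSL's CA lookup expects a "<subject-hash>.0" symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, e.g. b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # should point back at minikubeCA.pem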
	I1210 06:20:52.773129  259888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:20:52.777191  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:20:52.814779  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:20:52.849520  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:20:52.886341  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:20:52.923917  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:20:52.960431  259888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
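Note: each `-checkend 86400` call above exits 0 only if the certificate will still be valid 24 hours (86,400 seconds) from now; a failing check here would presumably prompt minikube to regenerate the affected certificate (an assumption, the regeneration path is not shown in this log). A short sketch of the same check:

	# Illustrative only: -checkend N succeeds only if the cert remains valid N seconds from now
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" || echo "expires within 24h"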
	I1210 06:20:52.999023  259888 kubeadm.go:401] StartCluster: {Name:pause-203121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-203121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:52.999159  259888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:20:52.999221  259888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:20:53.029794  259888 cri.go:89] found id: "cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7"
	I1210 06:20:53.029815  259888 cri.go:89] found id: "e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab"
	I1210 06:20:53.029822  259888 cri.go:89] found id: "4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c"
	I1210 06:20:53.029827  259888 cri.go:89] found id: "b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a"
	I1210 06:20:53.029832  259888 cri.go:89] found id: "6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add"
	I1210 06:20:53.029836  259888 cri.go:89] found id: "f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec"
	I1210 06:20:53.029840  259888 cri.go:89] found id: "9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05"
	I1210 06:20:53.029844  259888 cri.go:89] found id: ""
	I1210 06:20:53.029892  259888 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:20:53.042327  259888 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:20:53Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:20:53.042398  259888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:20:53.051326  259888 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:20:53.051344  259888 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:20:53.051392  259888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:20:53.059639  259888 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:20:53.060448  259888 kubeconfig.go:125] found "pause-203121" server: "https://192.168.103.2:8443"
	I1210 06:20:53.061648  259888 kapi.go:59] client config for pause-203121: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:20:53.062062  259888 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 06:20:53.062079  259888 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 06:20:53.062084  259888 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 06:20:53.062088  259888 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 06:20:53.062092  259888 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 06:20:53.062400  259888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:20:53.070708  259888 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1210 06:20:53.070744  259888 kubeadm.go:602] duration metric: took 19.393056ms to restartPrimaryControlPlane
	I1210 06:20:53.070754  259888 kubeadm.go:403] duration metric: took 71.740024ms to StartCluster
	I1210 06:20:53.070771  259888 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:20:53.070832  259888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:20:53.072032  259888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:20:53.072312  259888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:20:53.072403  259888 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:20:53.072561  259888 config.go:182] Loaded profile config "pause-203121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:20:53.074913  259888 out.go:179] * Verifying Kubernetes components...
	I1210 06:20:53.074915  259888 out.go:179] * Enabled addons: 
	I1210 06:20:53.076715  259888 addons.go:530] duration metric: took 4.317666ms for enable addons: enabled=[]
	I1210 06:20:53.076751  259888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:20:53.188259  259888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:20:53.204215  259888 node_ready.go:35] waiting up to 6m0s for node "pause-203121" to be "Ready" ...
	I1210 06:20:53.212415  259888 node_ready.go:49] node "pause-203121" is "Ready"
	I1210 06:20:53.212447  259888 node_ready.go:38] duration metric: took 8.194815ms for node "pause-203121" to be "Ready" ...
	I1210 06:20:53.212462  259888 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:20:53.212546  259888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:20:53.225078  259888 api_server.go:72] duration metric: took 152.723936ms to wait for apiserver process to appear ...
	I1210 06:20:53.225111  259888 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:20:53.225134  259888 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 06:20:53.230448  259888 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 06:20:53.231499  259888 api_server.go:141] control plane version: v1.34.2
	I1210 06:20:53.231527  259888 api_server.go:131] duration metric: took 6.409801ms to wait for apiserver health ...
	I1210 06:20:53.231536  259888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:20:53.235202  259888 system_pods.go:59] 7 kube-system pods found
	I1210 06:20:53.235255  259888 system_pods.go:61] "coredns-66bc5c9577-j8lrj" [3d75b8ab-fa07-448d-a04e-b1ebb0d07bff] Running
	I1210 06:20:53.235263  259888 system_pods.go:61] "etcd-pause-203121" [a3c22062-c777-48d2-b5d1-8d79812b722d] Running
	I1210 06:20:53.235267  259888 system_pods.go:61] "kindnet-qn46q" [f2260206-9397-4c0b-9d7d-5c59c7fde610] Running
	I1210 06:20:53.235272  259888 system_pods.go:61] "kube-apiserver-pause-203121" [d2215bf1-06dc-42bf-a4d7-ba5c7a3de06f] Running
	I1210 06:20:53.235279  259888 system_pods.go:61] "kube-controller-manager-pause-203121" [49680915-8036-4a3d-a23e-96ecf9cf91c1] Running
	I1210 06:20:53.235285  259888 system_pods.go:61] "kube-proxy-jqpjb" [3a5b610d-98d6-498d-84f7-e3edeaad1acf] Running
	I1210 06:20:53.235291  259888 system_pods.go:61] "kube-scheduler-pause-203121" [aa977e11-890e-4448-8841-294d7fcc64f1] Running
	I1210 06:20:53.235299  259888 system_pods.go:74] duration metric: took 3.75695ms to wait for pod list to return data ...
	I1210 06:20:53.235326  259888 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:20:53.237582  259888 default_sa.go:45] found service account: "default"
	I1210 06:20:53.237609  259888 default_sa.go:55] duration metric: took 2.276678ms for default service account to be created ...
	I1210 06:20:53.237620  259888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:20:53.240531  259888 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:53.240560  259888 system_pods.go:89] "coredns-66bc5c9577-j8lrj" [3d75b8ab-fa07-448d-a04e-b1ebb0d07bff] Running
	I1210 06:20:53.240568  259888 system_pods.go:89] "etcd-pause-203121" [a3c22062-c777-48d2-b5d1-8d79812b722d] Running
	I1210 06:20:53.240575  259888 system_pods.go:89] "kindnet-qn46q" [f2260206-9397-4c0b-9d7d-5c59c7fde610] Running
	I1210 06:20:53.240581  259888 system_pods.go:89] "kube-apiserver-pause-203121" [d2215bf1-06dc-42bf-a4d7-ba5c7a3de06f] Running
	I1210 06:20:53.240588  259888 system_pods.go:89] "kube-controller-manager-pause-203121" [49680915-8036-4a3d-a23e-96ecf9cf91c1] Running
	I1210 06:20:53.240594  259888 system_pods.go:89] "kube-proxy-jqpjb" [3a5b610d-98d6-498d-84f7-e3edeaad1acf] Running
	I1210 06:20:53.240603  259888 system_pods.go:89] "kube-scheduler-pause-203121" [aa977e11-890e-4448-8841-294d7fcc64f1] Running
	I1210 06:20:53.240611  259888 system_pods.go:126] duration metric: took 2.984605ms to wait for k8s-apps to be running ...
	I1210 06:20:53.240620  259888 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:20:53.240672  259888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:53.254682  259888 system_svc.go:56] duration metric: took 14.054968ms WaitForService to wait for kubelet
	I1210 06:20:53.254706  259888 kubeadm.go:587] duration metric: took 182.359492ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:20:53.254723  259888 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:20:53.257599  259888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:20:53.257639  259888 node_conditions.go:123] node cpu capacity is 8
	I1210 06:20:53.257655  259888 node_conditions.go:105] duration metric: took 2.927427ms to run NodePressure ...
	I1210 06:20:53.257666  259888 start.go:242] waiting for startup goroutines ...
	I1210 06:20:53.257673  259888 start.go:247] waiting for cluster config update ...
	I1210 06:20:53.257680  259888 start.go:256] writing updated cluster config ...
	I1210 06:20:53.257944  259888 ssh_runner.go:195] Run: rm -f paused
	I1210 06:20:53.262455  259888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:53.263482  259888 kapi.go:59] client config for pause-203121: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/profiles/pause-203121/client.key", CAFile:"/home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 06:20:53.266453  259888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j8lrj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.271145  259888 pod_ready.go:94] pod "coredns-66bc5c9577-j8lrj" is "Ready"
	I1210 06:20:53.271169  259888 pod_ready.go:86] duration metric: took 4.681171ms for pod "coredns-66bc5c9577-j8lrj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.273432  259888 pod_ready.go:83] waiting for pod "etcd-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.277316  259888 pod_ready.go:94] pod "etcd-pause-203121" is "Ready"
	I1210 06:20:53.277343  259888 pod_ready.go:86] duration metric: took 3.885502ms for pod "etcd-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.279481  259888 pod_ready.go:83] waiting for pod "kube-apiserver-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.283412  259888 pod_ready.go:94] pod "kube-apiserver-pause-203121" is "Ready"
	I1210 06:20:53.283431  259888 pod_ready.go:86] duration metric: took 3.930718ms for pod "kube-apiserver-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.285327  259888 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.667221  259888 pod_ready.go:94] pod "kube-controller-manager-pause-203121" is "Ready"
	I1210 06:20:53.667250  259888 pod_ready.go:86] duration metric: took 381.902327ms for pod "kube-controller-manager-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:53.867481  259888 pod_ready.go:83] waiting for pod "kube-proxy-jqpjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.266939  259888 pod_ready.go:94] pod "kube-proxy-jqpjb" is "Ready"
	I1210 06:20:54.266972  259888 pod_ready.go:86] duration metric: took 399.461326ms for pod "kube-proxy-jqpjb" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.467079  259888 pod_ready.go:83] waiting for pod "kube-scheduler-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.867410  259888 pod_ready.go:94] pod "kube-scheduler-pause-203121" is "Ready"
	I1210 06:20:54.867439  259888 pod_ready.go:86] duration metric: took 400.30366ms for pod "kube-scheduler-pause-203121" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:54.867453  259888 pod_ready.go:40] duration metric: took 1.604954095s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:54.915267  259888 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:20:54.917540  259888 out.go:179] * Done! kubectl is now configured to use "pause-203121" cluster and "default" namespace by default
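Note: the pod_ready "extra waiting" above polls the kube-system pods by the listed labels until each reports Ready. A roughly equivalent manual check, illustrative only and assuming the kubeconfig context carries the profile name (minikube's usual behaviour):

	# Illustrative only: check the same label-selected pods the test waits on
	kubectl --context pause-203121 -n kube-system get pods -l 'k8s-app in (kube-dns, kube-proxy)'
	kubectl --context pause-203121 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m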
	I1210 06:20:53.106184  252480 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:53.106219  252480 system_pods.go:89] "coredns-66bc5c9577-r7p5t" [97853ca3-8982-4324-a9f2-005209f7a2dd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:20:53.106229  252480 system_pods.go:89] "etcd-custom-flannel-201263" [a7895f5c-56b2-4262-b678-eab987e9faa4] Running
	I1210 06:20:53.106238  252480 system_pods.go:89] "kube-apiserver-custom-flannel-201263" [99edbcf6-7869-4984-b2b8-047a3bd5b219] Running
	I1210 06:20:53.106244  252480 system_pods.go:89] "kube-controller-manager-custom-flannel-201263" [efe68542-32c5-454b-aafb-f638325cac55] Running
	I1210 06:20:53.106252  252480 system_pods.go:89] "kube-proxy-lmwlf" [f64f811e-2c3a-4ace-bf39-4dfae0bf9e48] Running
	I1210 06:20:53.106257  252480 system_pods.go:89] "kube-scheduler-custom-flannel-201263" [f2835dd2-8d60-48e3-b5d0-bd3908ce76db] Running
	I1210 06:20:53.106262  252480 system_pods.go:89] "storage-provisioner" [f53483dd-3c55-479a-b395-e6caefa2136d] Running
	I1210 06:20:53.106280  252480 retry.go:31] will retry after 3.874078497s: missing components: kube-dns
	I1210 06:20:56.985484  252480 system_pods.go:86] 7 kube-system pods found
	I1210 06:20:56.985531  252480 system_pods.go:89] "coredns-66bc5c9577-r7p5t" [97853ca3-8982-4324-a9f2-005209f7a2dd] Running
	I1210 06:20:56.985539  252480 system_pods.go:89] "etcd-custom-flannel-201263" [a7895f5c-56b2-4262-b678-eab987e9faa4] Running
	I1210 06:20:56.985545  252480 system_pods.go:89] "kube-apiserver-custom-flannel-201263" [99edbcf6-7869-4984-b2b8-047a3bd5b219] Running
	I1210 06:20:56.985551  252480 system_pods.go:89] "kube-controller-manager-custom-flannel-201263" [efe68542-32c5-454b-aafb-f638325cac55] Running
	I1210 06:20:56.985556  252480 system_pods.go:89] "kube-proxy-lmwlf" [f64f811e-2c3a-4ace-bf39-4dfae0bf9e48] Running
	I1210 06:20:56.985561  252480 system_pods.go:89] "kube-scheduler-custom-flannel-201263" [f2835dd2-8d60-48e3-b5d0-bd3908ce76db] Running
	I1210 06:20:56.985566  252480 system_pods.go:89] "storage-provisioner" [f53483dd-3c55-479a-b395-e6caefa2136d] Running
	I1210 06:20:56.985578  252480 system_pods.go:126] duration metric: took 17.66948252s to wait for k8s-apps to be running ...
	I1210 06:20:56.985594  252480 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:20:56.985646  252480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:20:57.001951  252480 system_svc.go:56] duration metric: took 16.352185ms WaitForService to wait for kubelet
	I1210 06:20:57.001976  252480 kubeadm.go:587] duration metric: took 21.601313725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:20:57.001992  252480 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:20:57.006041  252480 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:20:57.006078  252480 node_conditions.go:123] node cpu capacity is 8
	I1210 06:20:57.006096  252480 node_conditions.go:105] duration metric: took 4.098938ms to run NodePressure ...
	I1210 06:20:57.006112  252480 start.go:242] waiting for startup goroutines ...
	I1210 06:20:57.006123  252480 start.go:247] waiting for cluster config update ...
	I1210 06:20:57.006141  252480 start.go:256] writing updated cluster config ...
	I1210 06:20:57.006502  252480 ssh_runner.go:195] Run: rm -f paused
	I1210 06:20:57.011682  252480 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:57.016060  252480 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r7p5t" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.021339  252480 pod_ready.go:94] pod "coredns-66bc5c9577-r7p5t" is "Ready"
	I1210 06:20:57.021370  252480 pod_ready.go:86] duration metric: took 5.282074ms for pod "coredns-66bc5c9577-r7p5t" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.023785  252480 pod_ready.go:83] waiting for pod "etcd-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.028686  252480 pod_ready.go:94] pod "etcd-custom-flannel-201263" is "Ready"
	I1210 06:20:57.028716  252480 pod_ready.go:86] duration metric: took 4.906716ms for pod "etcd-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.031099  252480 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.036195  252480 pod_ready.go:94] pod "kube-apiserver-custom-flannel-201263" is "Ready"
	I1210 06:20:57.036224  252480 pod_ready.go:86] duration metric: took 5.097302ms for pod "kube-apiserver-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.038837  252480 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.419589  252480 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-201263" is "Ready"
	I1210 06:20:57.419623  252480 pod_ready.go:86] duration metric: took 380.760936ms for pod "kube-controller-manager-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:57.617780  252480 pod_ready.go:83] waiting for pod "kube-proxy-lmwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:58.016808  252480 pod_ready.go:94] pod "kube-proxy-lmwlf" is "Ready"
	I1210 06:20:58.016830  252480 pod_ready.go:86] duration metric: took 399.023585ms for pod "kube-proxy-lmwlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:58.217535  252480 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:58.616862  252480 pod_ready.go:94] pod "kube-scheduler-custom-flannel-201263" is "Ready"
	I1210 06:20:58.616893  252480 pod_ready.go:86] duration metric: took 399.331048ms for pod "kube-scheduler-custom-flannel-201263" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:20:58.616909  252480 pod_ready.go:40] duration metric: took 1.605189734s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:20:58.670044  252480 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:20:58.675011  252480 out.go:179] * Done! kubectl is now configured to use "custom-flannel-201263" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.750917463Z" level=info msg="RDT not available in the host system"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.750932771Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.752135867Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.752163832Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.752185564Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.753261482Z" level=info msg="Conmon does support the --sync option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.753282166Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.762368412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.762405853Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.763174331Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.763721293Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.76379272Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.858691835Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-j8lrj Namespace:kube-system ID:ad79311d23062295b40365d723906fd145ff91e84249da8cdc377ac2af9dc420 UID:3d75b8ab-fa07-448d-a04e-b1ebb0d07bff NetNS:/var/run/netns/d09a97d5-d346-4fc7-ac09-eb22aa04e1a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a9a0}] Aliases:map[]}"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.858908203Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-j8lrj for CNI network kindnet (type=ptp)"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859420825Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859451548Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.85957935Z" level=info msg="Create NRI interface"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859680386Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859696474Z" level=info msg="runtime interface created"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859708308Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859713356Z" level=info msg="runtime interface starting up..."
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859718506Z" level=info msg="starting plugins..."
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.859729424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 06:20:51 pause-203121 crio[2184]: time="2025-12-10T06:20:51.860088803Z" level=info msg="No systemd watchdog enabled"
	Dec 10 06:20:51 pause-203121 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cf62f9e28d443       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   ad79311d23062       coredns-66bc5c9577-j8lrj               kube-system
	e00866f864193       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   55 seconds ago       Running             kube-proxy                0                   f37cdf94b850e       kube-proxy-jqpjb                       kube-system
	4324a96acbf26       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   55 seconds ago       Running             kindnet-cni               0                   f85bf0ad78155       kindnet-qn46q                          kube-system
	b0ef753ac71a3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   62317fbf6774f       kube-apiserver-pause-203121            kube-system
	6f2d0d213957b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   37752d2cfa69a       etcd-pause-203121                      kube-system
	f1f8b92df9fd1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   25b3b651ebe33       kube-scheduler-pause-203121            kube-system
	9fed510c4454c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   757ea1d5f8a73       kube-controller-manager-pause-203121   kube-system
	
	
	==> coredns [cf62f9e28d4439c6626f971c222c28ef61e7c99dca09cee86fc50eb02f1f11e7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33600 - 25107 "HINFO IN 1814052844277688122.7575454376246788054. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.437869762s
	
	
	==> describe nodes <==
	Name:               pause-203121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-203121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=pause-203121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:19:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-203121
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:20:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:19:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:19:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:19:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:20:49 +0000   Wed, 10 Dec 2025 06:20:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-203121
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                4c472031-a92e-4218-91a0-a496dc16bf08
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-j8lrj                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-pause-203121                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-qn46q                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-pause-203121             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-203121    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-jqpjb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-203121             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 61s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s   kubelet          Node pause-203121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s   kubelet          Node pause-203121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s   kubelet          Node pause-203121 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s   node-controller  Node pause-203121 event: Registered Node pause-203121 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-203121 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.744944] kauditd_printk_skb: 47 callbacks suppressed
	[Dec10 05:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.032224] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023939] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023886] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +2.047757] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +4.031567] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[  +8.191127] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[ +16.382234] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[Dec10 05:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a e7 3d 79 07 a4 2a ee 7c da f3 33 08 00
	[Dec10 06:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	
	
	==> etcd [6f2d0d213957beac3c690eeacb3151c1192c461d8284e6a53b4cfecdd4a17add] <==
	{"level":"warn","ts":"2025-12-10T06:19:55.559529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.570879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.583403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.598949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.607356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.625203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.630222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.639382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.650464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:19:55.724396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:20:01.577929Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.565459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-12-10T06:20:01.578029Z","caller":"traceutil/trace.go:172","msg":"trace[2068022794] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:285; }","duration":"178.674806ms","start":"2025-12-10T06:20:01.399337Z","end":"2025-12-10T06:20:01.578012Z","steps":["trace[2068022794] 'range keys from in-memory index tree'  (duration: 178.41605ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:02.125934Z","caller":"traceutil/trace.go:172","msg":"trace[1392134239] transaction","detail":"{read_only:false; response_revision:289; number_of_response:1; }","duration":"126.489303ms","start":"2025-12-10T06:20:01.999419Z","end":"2025-12-10T06:20:02.125908Z","steps":["trace[1392134239] 'process raft request'  (duration: 126.388642ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:20:05.215426Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.903292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790581099013742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-jqpjb.187fc64b24e4eaa8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-jqpjb.187fc64b24e4eaa8\" value_size:633 lease:4650418544244237643 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:20:05.215610Z","caller":"traceutil/trace.go:172","msg":"trace[422589769] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"152.738509ms","start":"2025-12-10T06:20:05.062862Z","end":"2025-12-10T06:20:05.215600Z","steps":["trace[422589769] 'process raft request'  (duration: 152.670252ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:05.215647Z","caller":"traceutil/trace.go:172","msg":"trace[772312394] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"171.398733ms","start":"2025-12-10T06:20:05.044219Z","end":"2025-12-10T06:20:05.215617Z","steps":["trace[772312394] 'process raft request'  (duration: 57.023451ms)","trace[772312394] 'compare'  (duration: 113.801328ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:20:05.408852Z","caller":"traceutil/trace.go:172","msg":"trace[1656008090] linearizableReadLoop","detail":"{readStateIndex:363; appliedIndex:363; }","duration":"180.134723ms","start":"2025-12-10T06:20:05.228693Z","end":"2025-12-10T06:20:05.408828Z","steps":["trace[1656008090] 'read index received'  (duration: 180.105942ms)","trace[1656008090] 'applied index is now lower than readState.Index'  (duration: 27.946µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:20:05.430934Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.222218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4338"}
	{"level":"info","ts":"2025-12-10T06:20:05.431081Z","caller":"traceutil/trace.go:172","msg":"trace[1692918075] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:354; }","duration":"202.375746ms","start":"2025-12-10T06:20:05.228690Z","end":"2025-12-10T06:20:05.431066Z","steps":["trace[1692918075] 'agreement among raft nodes before linearized reading'  (duration: 180.227597ms)","trace[1692918075] 'range keys from in-memory index tree'  (duration: 21.907105ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-10T06:20:05.431112Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.291779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-jqpjb\" limit:1 ","response":"range_response_count:1 size:5039"}
	{"level":"warn","ts":"2025-12-10T06:20:05.431117Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.40297ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-203121\" limit:1 ","response":"range_response_count:1 size:5560"}
	{"level":"info","ts":"2025-12-10T06:20:05.431148Z","caller":"traceutil/trace.go:172","msg":"trace[53438135] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-jqpjb; range_end:; response_count:1; response_revision:355; }","duration":"157.335995ms","start":"2025-12-10T06:20:05.273803Z","end":"2025-12-10T06:20:05.431139Z","steps":["trace[53438135] 'agreement among raft nodes before linearized reading'  (duration: 157.204511ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:05.431160Z","caller":"traceutil/trace.go:172","msg":"trace[1883692611] range","detail":"{range_begin:/registry/minions/pause-203121; range_end:; response_count:1; response_revision:355; }","duration":"202.445155ms","start":"2025-12-10T06:20:05.228698Z","end":"2025-12-10T06:20:05.431143Z","steps":["trace[1883692611] 'agreement among raft nodes before linearized reading'  (duration: 202.313271ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:20:05.431015Z","caller":"traceutil/trace.go:172","msg":"trace[404264850] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"212.399165ms","start":"2025-12-10T06:20:05.218590Z","end":"2025-12-10T06:20:05.430990Z","steps":["trace[404264850] 'process raft request'  (duration: 190.270673ms)","trace[404264850] 'compare'  (duration: 21.985742ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:20:49.959662Z","caller":"traceutil/trace.go:172","msg":"trace[374987701] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"104.244124ms","start":"2025-12-10T06:20:49.855399Z","end":"2025-12-10T06:20:49.959643Z","steps":["trace[374987701] 'process raft request'  (duration: 104.114555ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:21:00 up  1:03,  0 user,  load average: 4.87, 3.31, 2.01
	Linux pause-203121 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4324a96acbf26610fa24d25a6b10deeebb9cddb7fb94f5dfde55488050951f4c] <==
	I1210 06:20:05.144605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:20:05.144900       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:20:05.145057       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:20:05.145070       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:20:05.145110       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:20:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:20:05.346449       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:20:05.346927       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:20:05.346940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:20:05.347207       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 06:20:35.347256       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 06:20:35.347558       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 06:20:35.347562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 06:20:35.347719       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1210 06:20:36.947576       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:20:36.947612       1 metrics.go:72] Registering metrics
	I1210 06:20:36.947721       1 controller.go:711] "Syncing nftables rules"
	I1210 06:20:45.353394       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:20:45.353451       1 main.go:301] handling current node
	I1210 06:20:55.351599       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:20:55.351663       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b0ef753ac71a3588946b11e2247d60114c2ada8b6472fa9fe506e1f8d9b2576a] <==
	E1210 06:19:56.412697       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1210 06:19:56.464350       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:19:56.475353       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:19:56.481917       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1210 06:19:56.485293       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1210 06:19:56.503913       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:19:56.504006       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:19:56.599299       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:19:57.264431       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:19:57.269592       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:19:57.269615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:19:58.015357       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:19:58.068874       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:19:58.176405       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:19:58.184785       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1210 06:19:58.186235       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:19:58.193059       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:19:58.401686       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:19:59.370104       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:19:59.383527       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:19:59.395242       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:20:04.058147       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:20:04.255443       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 06:20:04.307815       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:20:04.314664       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9fed510c4454cb11f751b00c6dc02a48e1bb122a804caf714f4cbeae72fd6a05] <==
	I1210 06:20:03.459729       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:20:03.462933       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:20:03.471260       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 06:20:03.477601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 06:20:03.480014       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 06:20:03.480044       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:20:03.480174       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 06:20:03.480268       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-203121"
	I1210 06:20:03.480375       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 06:20:03.481483       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:20:03.493304       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:20:03.494462       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:20:03.501940       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:20:03.506426       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:20:03.506449       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:20:03.508078       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:20:03.510341       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:20:03.510356       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:20:03.510362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:20:03.512535       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:20:03.512625       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:20:03.517948       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:20:03.524400       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:20:03.559776       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-203121" podCIDRs=["10.244.0.0/24"]
	I1210 06:20:48.487961       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e00866f864193cb02d7fa4e6e4fdbc6ad01fdffb3408406ad2b0a2f2ca7546ab] <==
	I1210 06:20:05.046858       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:20:05.111328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:20:05.212186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:20:05.212235       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:20:05.212307       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:20:05.301257       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:20:05.301315       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:20:05.307971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:20:05.308343       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:20:05.308365       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:20:05.309776       1 config.go:200] "Starting service config controller"
	I1210 06:20:05.309802       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:20:05.309814       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:20:05.309833       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:20:05.309835       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:20:05.309849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:20:05.309865       1 config.go:309] "Starting node config controller"
	I1210 06:20:05.309871       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:20:05.309879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:20:05.410650       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:20:05.411085       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:20:05.411177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f1f8b92df9fd1da6da75299621207a74d1d2035f97ce2dd8c961fcf715a4e7ec] <==
	E1210 06:19:56.506603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:19:56.506778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:19:56.506974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:19:56.507104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:19:56.507175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:19:56.507239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:19:56.507293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:19:56.507349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:19:56.507498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:19:56.507653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:19:56.507838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:19:56.507928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:19:56.512317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:19:57.327441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:19:57.352939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:19:57.499636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:19:57.551172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:19:57.580220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:19:57.608153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:19:57.625564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:19:57.633363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:19:57.726117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:19:57.730464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:19:57.937350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 06:20:00.290030       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337347    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pnfb\" (UniqueName: \"kubernetes.io/projected/f2260206-9397-4c0b-9d7d-5c59c7fde610-kube-api-access-6pnfb\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337404    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a5b610d-98d6-498d-84f7-e3edeaad1acf-xtables-lock\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337432    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a5b610d-98d6-498d-84f7-e3edeaad1acf-lib-modules\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337494    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2260206-9397-4c0b-9d7d-5c59c7fde610-lib-modules\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337519    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszqv\" (UniqueName: \"kubernetes.io/projected/3a5b610d-98d6-498d-84f7-e3edeaad1acf-kube-api-access-hszqv\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337543    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f2260206-9397-4c0b-9d7d-5c59c7fde610-cni-cfg\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337612    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2260206-9397-4c0b-9d7d-5c59c7fde610-xtables-lock\") pod \"kindnet-qn46q\" (UID: \"f2260206-9397-4c0b-9d7d-5c59c7fde610\") " pod="kube-system/kindnet-qn46q"
	Dec 10 06:20:04 pause-203121 kubelet[1328]: I1210 06:20:04.337648    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a5b610d-98d6-498d-84f7-e3edeaad1acf-kube-proxy\") pod \"kube-proxy-jqpjb\" (UID: \"3a5b610d-98d6-498d-84f7-e3edeaad1acf\") " pod="kube-system/kube-proxy-jqpjb"
	Dec 10 06:20:05 pause-203121 kubelet[1328]: I1210 06:20:05.520611    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqpjb" podStartSLOduration=1.520586828 podStartE2EDuration="1.520586828s" podCreationTimestamp="2025-12-10 06:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:20:05.519951265 +0000 UTC m=+6.399575737" watchObservedRunningTime="2025-12-10 06:20:05.520586828 +0000 UTC m=+6.400211296"
	Dec 10 06:20:07 pause-203121 kubelet[1328]: I1210 06:20:07.171426    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qn46q" podStartSLOduration=3.171399883 podStartE2EDuration="3.171399883s" podCreationTimestamp="2025-12-10 06:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:20:05.562832674 +0000 UTC m=+6.442457145" watchObservedRunningTime="2025-12-10 06:20:07.171399883 +0000 UTC m=+8.051024351"
	Dec 10 06:20:45 pause-203121 kubelet[1328]: I1210 06:20:45.829970    1328 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:20:45 pause-203121 kubelet[1328]: I1210 06:20:45.937522    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d75b8ab-fa07-448d-a04e-b1ebb0d07bff-config-volume\") pod \"coredns-66bc5c9577-j8lrj\" (UID: \"3d75b8ab-fa07-448d-a04e-b1ebb0d07bff\") " pod="kube-system/coredns-66bc5c9577-j8lrj"
	Dec 10 06:20:45 pause-203121 kubelet[1328]: I1210 06:20:45.937597    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7h4s\" (UniqueName: \"kubernetes.io/projected/3d75b8ab-fa07-448d-a04e-b1ebb0d07bff-kube-api-access-l7h4s\") pod \"coredns-66bc5c9577-j8lrj\" (UID: \"3d75b8ab-fa07-448d-a04e-b1ebb0d07bff\") " pod="kube-system/coredns-66bc5c9577-j8lrj"
	Dec 10 06:20:46 pause-203121 kubelet[1328]: I1210 06:20:46.410815    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j8lrj" podStartSLOduration=42.410787257 podStartE2EDuration="42.410787257s" podCreationTimestamp="2025-12-10 06:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:20:46.392300366 +0000 UTC m=+47.271924847" watchObservedRunningTime="2025-12-10 06:20:46.410787257 +0000 UTC m=+47.290411727"
	Dec 10 06:20:49 pause-203121 kubelet[1328]: W1210 06:20:49.850195    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:20:49 pause-203121 kubelet[1328]: E1210 06:20:49.850360    1328 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:20:49 pause-203121 kubelet[1328]: W1210 06:20:49.951423    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:20:50 pause-203121 kubelet[1328]: W1210 06:20:50.122006    1328 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 10 06:20:50 pause-203121 kubelet[1328]: E1210 06:20:50.384362    1328 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 10 06:20:50 pause-203121 kubelet[1328]: E1210 06:20:50.384449    1328 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:20:50 pause-203121 kubelet[1328]: E1210 06:20:50.384509    1328 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 10 06:20:55 pause-203121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:20:55 pause-203121 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:20:55 pause-203121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:55 pause-203121 systemd[1]: kubelet.service: Consumed 2.425s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-203121 -n pause-203121
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-203121 -n pause-203121: exit status 2 (393.640901ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-203121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (451.666467ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:23:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
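The exit-11 failure above comes from minikube's paused-state probe, which shells into the node and runs the runc command quoted in the stderr. A minimal sketch for reproducing it by hand, assuming the profile name used by this test (old-k8s-version-424086) and the CRI-O configuration dumped earlier in this report (default_runtime = "crun", so a /run/runc state directory may simply never have been created):

	# Re-run the exact probe minikube executes (command copied verbatim from the stderr above);
	# it is expected to fail the same way if /run/runc does not exist on the node.
	minikube -p old-k8s-version-424086 ssh "sudo runc list -f json"

	# Check which OCI runtime state directories are actually present on the node;
	# with crun as the default runtime, /run/crun is expected while /run/runc may be absent.
	minikube -p old-k8s-version-424086 ssh "ls -d /run/crun /run/runc"
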
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-424086 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-424086 describe deploy/metrics-server -n kube-system: exit status 1 (78.951165ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-424086 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-424086
helpers_test.go:244: (dbg) docker inspect old-k8s-version-424086:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe",
	        "Created": "2025-12-10T06:22:41.51619025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:22:41.557527243Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/hosts",
	        "LogPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe-json.log",
	        "Name": "/old-k8s-version-424086",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-424086:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-424086",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe",
	                "LowerDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-424086",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-424086/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-424086",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-424086",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-424086",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "82136eb1850b937159340d8103779a8acc6c030ffb69a19a2fe697fd0c05b967",
	            "SandboxKey": "/var/run/docker/netns/82136eb1850b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-424086": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "027b45f7486274e056d3788623afd124007b73108c46de2edc58de9683929366",
	                    "EndpointID": "dfad802d6ce19cbde6b4edab739cfba71e54c2e3fa11b2366a07b0d57d2d4e5e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "1e:1f:ef:c5:01:a4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-424086",
	                        "d21017d71f3a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
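For reference, the inspect output above shows SSH published on 127.0.0.1:33094. A minimal sketch, assuming a local docker CLI and the same profile name, of reading that port back with an inspect format string (the same template the provisioning logs below use against other profiles):

	# hypothetical: extract the host port mapped to the container's 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-424086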
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-424086 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-424086 logs -n 25: (1.364975006s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-201263 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                   │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo docker system info                                                                                                                                 │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cri-dockerd --version                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo containerd config dump                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo crio config                                                                                                                                        │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p bridge-201263                                                                                                                                                         │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                          │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:23:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:23:15.956251  314350 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:23:15.956383  314350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:15.956394  314350 out.go:374] Setting ErrFile to fd 2...
	I1210 06:23:15.956400  314350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:15.956756  314350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:23:15.957417  314350 out.go:368] Setting JSON to false
	I1210 06:23:15.958886  314350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3947,"bootTime":1765343849,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:23:15.958953  314350 start.go:143] virtualization: kvm guest
	I1210 06:23:15.961585  314350 out.go:179] * [default-k8s-diff-port-643991] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:23:15.962993  314350 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:23:15.962999  314350 notify.go:221] Checking for updates...
	I1210 06:23:15.965732  314350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:23:15.967959  314350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:23:15.969747  314350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:23:15.971140  314350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:23:15.976242  314350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:23:15.978673  314350 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:23:15.978817  314350 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:23:15.978932  314350 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:23:15.979045  314350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:23:16.007256  314350 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:23:16.007387  314350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:16.072592  314350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-10 06:23:16.060841065 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:16.072786  314350 docker.go:319] overlay module found
	I1210 06:23:16.076105  314350 out.go:179] * Using the docker driver based on user configuration
	I1210 06:23:16.077550  314350 start.go:309] selected driver: docker
	I1210 06:23:16.077571  314350 start.go:927] validating driver "docker" against <nil>
	I1210 06:23:16.077589  314350 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:23:16.078423  314350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:16.147396  314350 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-10 06:23:16.136268489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:16.147664  314350 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 06:23:16.147929  314350 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:16.149887  314350 out.go:179] * Using Docker driver with root privileges
	I1210 06:23:16.151111  314350 cni.go:84] Creating CNI manager for ""
	I1210 06:23:16.151180  314350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:16.151187  314350 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:23:16.151262  314350 start.go:353] cluster config:
	{Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:16.152796  314350 out.go:179] * Starting "default-k8s-diff-port-643991" primary control-plane node in "default-k8s-diff-port-643991" cluster
	I1210 06:23:16.154143  314350 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:23:16.155492  314350 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:23:16.156796  314350 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:23:16.156837  314350 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:23:16.156845  314350 cache.go:65] Caching tarball of preloaded images
	I1210 06:23:16.156912  314350 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:23:16.156935  314350 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:23:16.156945  314350 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:23:16.157081  314350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json ...
	I1210 06:23:16.157105  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json: {Name:mk7ecd50983ce9a53df7d8fb65a85d6414cd2479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:16.184953  314350 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:23:16.184993  314350 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:23:16.185010  314350 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:23:16.185047  314350 start.go:360] acquireMachinesLock for default-k8s-diff-port-643991: {Name:mk370efe05d640ea21e9150c952c3b99e34124d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:23:16.185153  314350 start.go:364] duration metric: took 85.314µs to acquireMachinesLock for "default-k8s-diff-port-643991"
	I1210 06:23:16.185185  314350 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:23:16.185276  314350 start.go:125] createHost starting for "" (driver="docker")
	I1210 06:23:14.617973  309386 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-133470
	
	I1210 06:23:14.618000  309386 ubuntu.go:182] provisioning hostname "embed-certs-133470"
	I1210 06:23:14.618061  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:14.639893  309386 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:14.640138  309386 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 06:23:14.640166  309386 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-133470 && echo "embed-certs-133470" | sudo tee /etc/hostname
	I1210 06:23:14.868874  309386 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-133470
	
	I1210 06:23:14.868999  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:14.888852  309386 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:14.889097  309386 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 06:23:14.889113  309386 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-133470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-133470/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-133470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:23:15.022796  309386 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:23:15.022828  309386 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:23:15.022862  309386 ubuntu.go:190] setting up certificates
	I1210 06:23:15.022874  309386 provision.go:84] configureAuth start
	I1210 06:23:15.022929  309386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-133470
	I1210 06:23:15.045660  309386 provision.go:143] copyHostCerts
	I1210 06:23:15.045760  309386 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:23:15.045777  309386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:23:15.045919  309386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:23:15.046131  309386 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:23:15.046152  309386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:23:15.046198  309386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:23:15.046313  309386 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:23:15.046324  309386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:23:15.046363  309386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:23:15.046766  309386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.embed-certs-133470 san=[127.0.0.1 192.168.94.2 embed-certs-133470 localhost minikube]
	I1210 06:23:15.102384  309386 provision.go:177] copyRemoteCerts
	I1210 06:23:15.102458  309386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:23:15.102519  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:15.125826  309386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/embed-certs-133470/id_rsa Username:docker}
	I1210 06:23:15.229082  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 06:23:15.255324  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:23:15.280072  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:23:15.304405  309386 provision.go:87] duration metric: took 281.503766ms to configureAuth
	I1210 06:23:15.304452  309386 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:23:15.304662  309386 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:23:15.304813  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:15.329020  309386 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:15.329255  309386 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1210 06:23:15.329271  309386 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:23:15.629095  309386 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:23:15.629121  309386 machine.go:97] duration metric: took 4.186057165s to provisionDockerMachine
	I1210 06:23:15.629133  309386 client.go:176] duration metric: took 10.982765903s to LocalClient.Create
	I1210 06:23:15.629165  309386 start.go:167] duration metric: took 10.982841613s to libmachine.API.Create "embed-certs-133470"
	I1210 06:23:15.629175  309386 start.go:293] postStartSetup for "embed-certs-133470" (driver="docker")
	I1210 06:23:15.629187  309386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:23:15.629266  309386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:23:15.629318  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:15.652691  309386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/embed-certs-133470/id_rsa Username:docker}
	I1210 06:23:15.759308  309386 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:23:15.763533  309386 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:23:15.763569  309386 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:23:15.763583  309386 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:23:15.763641  309386 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:23:15.763762  309386 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:23:15.763891  309386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:23:15.772621  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:23:15.798345  309386 start.go:296] duration metric: took 169.155751ms for postStartSetup
	I1210 06:23:15.798790  309386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-133470
	I1210 06:23:15.821107  309386 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/config.json ...
	I1210 06:23:15.821434  309386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:23:15.821513  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:15.842492  309386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/embed-certs-133470/id_rsa Username:docker}
	I1210 06:23:15.940213  309386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:23:15.946681  309386 start.go:128] duration metric: took 11.303602246s to createHost
	I1210 06:23:15.946710  309386 start.go:83] releasing machines lock for "embed-certs-133470", held for 11.303731731s
	I1210 06:23:15.946794  309386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-133470
	I1210 06:23:15.971624  309386 ssh_runner.go:195] Run: cat /version.json
	I1210 06:23:15.971680  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:15.971717  309386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:23:15.971801  309386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:23:15.993666  309386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/embed-certs-133470/id_rsa Username:docker}
	I1210 06:23:15.994830  309386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/embed-certs-133470/id_rsa Username:docker}
	I1210 06:23:16.092445  309386 ssh_runner.go:195] Run: systemctl --version
	I1210 06:23:16.164195  309386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:23:16.215573  309386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:23:16.222912  309386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:23:16.223067  309386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:23:16.264608  309386 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:23:16.264650  309386 start.go:496] detecting cgroup driver to use...
	I1210 06:23:16.264688  309386 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:23:16.264762  309386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:23:16.290370  309386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:23:16.309525  309386 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:23:16.309585  309386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:23:16.333026  309386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:23:16.359621  309386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:23:16.463808  309386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:23:16.642188  309386 docker.go:234] disabling docker service ...
	I1210 06:23:16.642260  309386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:23:16.668023  309386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:23:16.684290  309386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:23:16.875882  309386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:23:16.985385  309386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:23:17.000859  309386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:23:17.020840  309386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:23:17.020902  309386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.049410  309386 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:23:17.049492  309386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.066147  309386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.081013  309386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.094120  309386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:23:17.104501  309386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.115319  309386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.133862  309386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:17.146891  309386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:23:17.157142  309386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:23:17.166256  309386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:17.292726  309386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:23:17.768420  309386 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:23:17.768565  309386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:23:17.774350  309386 start.go:564] Will wait 60s for crictl version
	I1210 06:23:17.774434  309386 ssh_runner.go:195] Run: which crictl
	I1210 06:23:17.778769  309386 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:23:17.809022  309386 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:23:17.809118  309386 ssh_runner.go:195] Run: crio --version
	I1210 06:23:17.840483  309386 ssh_runner.go:195] Run: crio --version
	I1210 06:23:17.880588  309386 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 06:23:16.020604  303393 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22089-8832/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 06:23:16.020641  303393 cache_images.go:125] Successfully loaded all cached images
	I1210 06:23:16.020646  303393 cache_images.go:94] duration metric: took 12.79889439s to LoadCachedImages
	I1210 06:23:16.020658  303393 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:23:16.020756  303393 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-713838 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-713838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:23:16.020835  303393 ssh_runner.go:195] Run: crio config
	I1210 06:23:16.079933  303393 cni.go:84] Creating CNI manager for ""
	I1210 06:23:16.079957  303393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:16.079972  303393 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:23:16.080008  303393 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713838 NodeName:no-preload-713838 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:23:16.080198  303393 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713838"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
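The block above is the complete generated kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new and fed to kubeadm init. As a rough illustration of how such a fragment can be rendered from per-profile values with Go's text/template, here is a hypothetical stand-in; the struct and template below are not minikube's actual template source:

package main

import (
	"os"
	"text/template"
)

// Hypothetical per-profile values mirroring the networking stanza above.
type networking struct {
	DNSDomain     string
	PodSubnet     string
	ServiceSubnet string
}

const networkingTmpl = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networkingTmpl))
	// Values taken from the kubeadm config printed in the log.
	_ = t.Execute(os.Stdout, networking{
		DNSDomain:     "cluster.local",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	})
}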
	
	I1210 06:23:16.080271  303393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:23:16.089627  303393 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1210 06:23:16.089688  303393 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:23:16.100446  303393 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1210 06:23:16.101172  303393 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1210 06:23:16.101267  303393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1210 06:23:16.101341  303393 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1210 06:23:16.107096  303393 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1210 06:23:16.107136  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1210 06:23:16.951014  303393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:23:16.965553  303393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1210 06:23:16.969949  303393 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1210 06:23:16.969989  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1210 06:23:17.084026  303393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1210 06:23:17.091041  303393 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1210 06:23:17.091085  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
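Each binary above is fetched with a ?checksum=file:...sha256 query, meaning the cached file is validated against the digest published next to it before being scp'd to the node. A minimal local sketch of that verification step, assuming the companion .sha256 file holds just the hex digest; the file names here are placeholders:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifyChecksum compares a cached binary against the hex digest stored in
// its companion .sha256 file (e.g. kubelet vs. kubelet.sha256).
func verifyChecksum(binaryPath, shaPath string) error {
	want, err := os.ReadFile(shaPath)
	if err != nil {
		return err
	}
	f, err := os.Open(binaryPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", binaryPath, got)
	}
	return nil
}

func main() {
	if err := verifyChecksum("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}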
	I1210 06:23:17.347126  303393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:23:17.355827  303393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1210 06:23:17.369231  303393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:23:17.507127  303393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1210 06:23:17.521028  303393 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:23:17.525041  303393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
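The bash one-liner above keeps /etc/hosts idempotent: drop any stale control-plane.minikube.internal line, append the current mapping, and copy a temp file back into place. A rough local Go equivalent of the same rewrite; the function name and the rename-based write are illustrative, since the log does this remotely with a temp file and sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-format file so that exactly one line
// maps the given hostname, mirroring the grep -v / append / cp pattern above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the remote version copies via sudo cp instead
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}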
	I1210 06:23:17.571860  303393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:17.695993  303393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:23:17.720015  303393 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838 for IP: 192.168.103.2
	I1210 06:23:17.720044  303393 certs.go:195] generating shared ca certs ...
	I1210 06:23:17.720066  303393 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:17.720256  303393 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:23:17.720318  303393 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:23:17.720329  303393 certs.go:257] generating profile certs ...
	I1210 06:23:17.720405  303393 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/client.key
	I1210 06:23:17.720419  303393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/client.crt with IP's: []
	I1210 06:23:17.810923  303393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/client.crt ...
	I1210 06:23:17.810948  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/client.crt: {Name:mk420a7bfe1b0f9f7d49f4f6084132b1f0b3b78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:17.811099  303393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/client.key ...
	I1210 06:23:17.811110  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/client.key: {Name:mk5d9c5a9a43a6d3653e4cebd5807dd4e4ec1356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:17.811189  303393 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.key.503be992
	I1210 06:23:17.811202  303393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.crt.503be992 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1210 06:23:17.965291  303393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.crt.503be992 ...
	I1210 06:23:17.965318  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.crt.503be992: {Name:mkf5475a683926818f5e5673adfc9b715b6b43dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:17.965502  303393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.key.503be992 ...
	I1210 06:23:17.965521  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.key.503be992: {Name:mked574e692413d05ea6f4f725d101d9d91d4bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:17.965636  303393 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.crt.503be992 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.crt
	I1210 06:23:17.965739  303393 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.key.503be992 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.key
	I1210 06:23:17.965826  303393 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.key
	I1210 06:23:17.965863  303393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.crt with IP's: []
	I1210 06:23:18.061181  303393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.crt ...
	I1210 06:23:18.061208  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.crt: {Name:mk5fd705de3af9376cc9d9798e805e5fcbd3c0d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.061370  303393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.key ...
	I1210 06:23:18.061392  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.key: {Name:mkac1564303f9b89ca6f563881ac8970e7a60439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
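The profile certs generated above are fresh key pairs signed by the shared minikubeCA, with the apiserver cert carrying the service VIP, loopback, and node IP as SANs. A compressed crypto/x509 sketch of that signing step; the self-signed stand-in CA and all names below are illustrative and not taken from minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Stand-in CA; in the log the existing minikubeCA key and cert are reused.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// apiserver serving cert carrying the IP SANs printed in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}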
	I1210 06:23:18.061652  303393 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:23:18.061711  303393 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:23:18.061727  303393 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:23:18.061767  303393 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:23:18.061804  303393 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:23:18.061840  303393 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:23:18.061903  303393 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:23:18.062484  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:23:18.083670  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:23:18.104754  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:23:18.125010  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:23:18.145284  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:23:18.167774  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:23:18.193264  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:23:18.213078  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/no-preload-713838/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:23:18.234259  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:23:18.256950  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:23:18.281598  303393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:23:18.303879  303393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:23:18.320450  303393 ssh_runner.go:195] Run: openssl version
	I1210 06:23:18.327399  303393 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:23:18.337069  303393 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:23:18.345746  303393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:23:18.350441  303393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:23:18.350532  303393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:23:18.394387  303393 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:18.402814  303393 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:18.411255  303393 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.419705  303393 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:23:18.429884  303393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.434210  303393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.434264  303393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.471108  303393 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:23:18.480320  303393 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:23:18.489104  303393 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:23:18.498267  303393 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:23:18.507310  303393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:23:18.512259  303393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:23:18.512327  303393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:23:18.554898  303393 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:23:18.565017  303393 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
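The repeated test -s / ln -fs / openssl x509 -hash sequence above installs each CA under /usr/share/ca-certificates and links it from /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0) so the system trust store can resolve it. A small Go wrapper around those same two commands, purely illustrative; in the log this runs over the ssh_runner and needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log: ask openssl for the cert's subject
// hash, then symlink /etc/ssl/certs/<hash>.0 at the installed PEM.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // same effect as ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}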
	I1210 06:23:18.575373  303393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:23:18.579967  303393 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:23:18.580023  303393 kubeadm.go:401] StartCluster: {Name:no-preload-713838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-713838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:18.580105  303393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:23:18.580158  303393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:23:18.613636  303393 cri.go:89] found id: ""
	I1210 06:23:18.613707  303393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:23:18.622801  303393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:23:18.631620  303393 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:23:18.631681  303393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:23:18.641959  303393 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:23:18.641977  303393 kubeadm.go:158] found existing configuration files:
	
	I1210 06:23:18.642033  303393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:23:18.650940  303393 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:23:18.651000  303393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:23:18.659345  303393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:23:18.667786  303393 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:23:18.667855  303393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:23:18.676814  303393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:23:18.686186  303393 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:23:18.686253  303393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:23:18.694796  303393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:23:18.703577  303393 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:23:18.703635  303393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
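Each kubeconfig above is grepped for the expected https://control-plane.minikube.internal:8443 server line and deleted when the check fails, so kubeadm init starts from a clean slate. The same decision, sketched as a local Go helper with illustrative names:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// removeIfStale deletes a kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep-then-rm sequence in the log.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := removeIfStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}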
	I1210 06:23:18.711997  303393 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:23:18.762758  303393 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:23:18.762832  303393 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:23:18.849432  303393 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:23:18.849542  303393 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:23:18.849588  303393 kubeadm.go:319] OS: Linux
	I1210 06:23:18.849652  303393 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:23:18.849766  303393 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:23:18.849857  303393 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:23:18.849933  303393 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:23:18.850010  303393 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:23:18.850115  303393 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:23:18.850211  303393 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:23:18.850297  303393 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:23:18.920972  303393 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:23:18.921108  303393 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:23:18.921251  303393 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:23:18.936905  303393 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:23:17.882971  309386 cli_runner.go:164] Run: docker network inspect embed-certs-133470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:23:17.903049  309386 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1210 06:23:17.907996  309386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:23:17.921250  309386 kubeadm.go:884] updating cluster {Name:embed-certs-133470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-133470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:23:17.921403  309386 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:23:17.921462  309386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:23:17.959872  309386 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:23:17.959896  309386 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:23:17.959944  309386 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:23:17.989858  309386 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:23:17.989884  309386 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:23:17.989893  309386 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1210 06:23:17.990042  309386 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-133470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-133470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:23:17.990135  309386 ssh_runner.go:195] Run: crio config
	I1210 06:23:18.048868  309386 cni.go:84] Creating CNI manager for ""
	I1210 06:23:18.048898  309386 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:18.048951  309386 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:23:18.048986  309386 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-133470 NodeName:embed-certs-133470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:23:18.049161  309386 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-133470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:23:18.049233  309386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:23:18.059308  309386 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:23:18.059383  309386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:23:18.069285  309386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1210 06:23:18.085012  309386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:23:18.104575  309386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1210 06:23:18.119602  309386 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:23:18.124092  309386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:23:18.135572  309386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:18.232886  309386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:23:18.265745  309386 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470 for IP: 192.168.94.2
	I1210 06:23:18.265765  309386 certs.go:195] generating shared ca certs ...
	I1210 06:23:18.265782  309386 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.265960  309386 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:23:18.266021  309386 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:23:18.266044  309386 certs.go:257] generating profile certs ...
	I1210 06:23:18.266112  309386 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/client.key
	I1210 06:23:18.266133  309386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/client.crt with IP's: []
	I1210 06:23:18.423969  309386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/client.crt ...
	I1210 06:23:18.423996  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/client.crt: {Name:mk7233a184a96be2b536908552901f583afcdb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.424181  309386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/client.key ...
	I1210 06:23:18.424195  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/client.key: {Name:mk543c2c780216d3220fc6baca807ee9bccabee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.424310  309386 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.key.4e8e5ab2
	I1210 06:23:18.424333  309386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.crt.4e8e5ab2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1210 06:23:18.567869  309386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.crt.4e8e5ab2 ...
	I1210 06:23:18.567894  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.crt.4e8e5ab2: {Name:mk3c95e7f03fe330208f0d00440b72bcfd75a2eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.568053  309386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.key.4e8e5ab2 ...
	I1210 06:23:18.568065  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.key.4e8e5ab2: {Name:mkf397140b031bf07603ac8b8e86d92ad8ffe1e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.568149  309386 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.crt.4e8e5ab2 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.crt
	I1210 06:23:18.568218  309386 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.key.4e8e5ab2 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.key
	I1210 06:23:18.568280  309386 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.key
	I1210 06:23:18.568295  309386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.crt with IP's: []
	I1210 06:23:18.683758  309386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.crt ...
	I1210 06:23:18.683789  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.crt: {Name:mke84e02612e7b377e03014cb90543cc50f471a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.683957  309386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.key ...
	I1210 06:23:18.683974  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.key: {Name:mkc2f3c1de193a2d324a0d6686e6d5ac20e00af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:18.684192  309386 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:23:18.684233  309386 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:23:18.684254  309386 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:23:18.684297  309386 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:23:18.684337  309386 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:23:18.684372  309386 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:23:18.684432  309386 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:23:18.685125  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:23:18.708060  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:23:18.727571  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:23:18.748806  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:23:18.772173  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 06:23:18.797268  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:23:18.818334  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:23:18.838692  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/embed-certs-133470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:23:18.859336  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:23:18.885459  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:23:18.908730  309386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:23:18.930286  309386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:23:18.947413  309386 ssh_runner.go:195] Run: openssl version
	I1210 06:23:18.954078  309386 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.962543  309386 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:23:18.970719  309386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.974972  309386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:18.975050  309386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:19.010667  309386 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:23:19.020263  309386 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:23:19.029351  309386 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:23:19.038966  309386 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:23:19.050055  309386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:23:19.054633  309386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:23:19.054717  309386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:23:19.092822  309386 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:23:19.101302  309386 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:23:19.109774  309386 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:23:19.118661  309386 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:23:19.127797  309386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:23:19.132160  309386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:23:19.132223  309386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:23:19.168483  309386 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:19.176947  309386 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:19.186049  309386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:23:19.190247  309386 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:23:19.190321  309386 kubeadm.go:401] StartCluster: {Name:embed-certs-133470 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-133470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:19.190408  309386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:23:19.190489  309386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:23:19.220351  309386 cri.go:89] found id: ""
	I1210 06:23:19.220427  309386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:23:19.230209  309386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:23:19.239015  309386 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:23:19.239070  309386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:23:19.248763  309386 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:23:19.248784  309386 kubeadm.go:158] found existing configuration files:
	
	I1210 06:23:19.248842  309386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:23:19.258568  309386 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:23:19.258637  309386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:23:19.268650  309386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:23:19.277937  309386 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:23:19.278003  309386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:23:19.287004  309386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:23:19.297146  309386 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:23:19.297206  309386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:23:19.307656  309386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:23:19.316139  309386 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:23:19.316217  309386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:23:19.324333  309386 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W1210 06:23:16.291735  294283 node_ready.go:57] node "old-k8s-version-424086" has "Ready":"False" status (will retry)
	W1210 06:23:18.790535  294283 node_ready.go:57] node "old-k8s-version-424086" has "Ready":"False" status (will retry)
	I1210 06:23:18.941194  303393 out.go:252]   - Generating certificates and keys ...
	I1210 06:23:18.941296  303393 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:23:18.941374  303393 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:23:19.077740  303393 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:23:19.179369  303393 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:23:19.314613  303393 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:23:19.443566  303393 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:23:19.540430  303393 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:23:19.540683  303393 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-713838] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 06:23:19.674998  303393 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:23:19.675214  303393 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-713838] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 06:23:19.789960  303393 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:23:19.832266  303393 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:23:19.866492  303393 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:23:19.866595  303393 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:23:19.922024  303393 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:23:20.018404  303393 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:23:20.081458  303393 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:23:20.217353  303393 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:23:20.286659  303393 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:23:20.287212  303393 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:23:20.345686  303393 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:23:20.421752  303393 out.go:252]   - Booting up control plane ...
	I1210 06:23:20.421880  303393 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:23:20.422000  303393 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:23:20.422060  303393 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:23:20.422217  303393 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:23:20.422366  303393 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:23:20.422541  303393 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:23:20.422680  303393 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:23:20.422747  303393 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:23:20.485531  303393 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:23:20.485693  303393 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
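kubeadm now polls the kubelet's local healthz endpoint (http://127.0.0.1:10248/healthz) until it answers, with a 4m0s ceiling. A minimal polling loop in the same spirit; the interval and timeout below are assumptions, not kubeadm's exact values:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it returns 200
// or the deadline passes, similar to kubeadm's kubelet-check phase.
func waitForKubelet(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubelet healthy")
}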
	I1210 06:23:16.187957  314350 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:23:16.188276  314350 start.go:159] libmachine.API.Create for "default-k8s-diff-port-643991" (driver="docker")
	I1210 06:23:16.188320  314350 client.go:173] LocalClient.Create starting
	I1210 06:23:16.188417  314350 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 06:23:16.188457  314350 main.go:143] libmachine: Decoding PEM data...
	I1210 06:23:16.188505  314350 main.go:143] libmachine: Parsing certificate...
	I1210 06:23:16.188598  314350 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 06:23:16.188635  314350 main.go:143] libmachine: Decoding PEM data...
	I1210 06:23:16.188667  314350 main.go:143] libmachine: Parsing certificate...
	I1210 06:23:16.189160  314350 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-643991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:23:16.214378  314350 cli_runner.go:211] docker network inspect default-k8s-diff-port-643991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:23:16.214464  314350 network_create.go:284] running [docker network inspect default-k8s-diff-port-643991] to gather additional debugging logs...
	I1210 06:23:16.214512  314350 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-643991
	W1210 06:23:16.241236  314350 cli_runner.go:211] docker network inspect default-k8s-diff-port-643991 returned with exit code 1
	I1210 06:23:16.241271  314350 network_create.go:287] error running [docker network inspect default-k8s-diff-port-643991]: docker network inspect default-k8s-diff-port-643991: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-643991 not found
	I1210 06:23:16.241290  314350 network_create.go:289] output of [docker network inspect default-k8s-diff-port-643991]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-643991 not found
	
	** /stderr **
	I1210 06:23:16.241419  314350 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:23:16.268651  314350 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
	I1210 06:23:16.269633  314350 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2fbfa5ca31a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:30:9e:0a:da:73} reservation:<nil>}
	I1210 06:23:16.270774  314350 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-68b4fc4b224b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:0a:d7:21:69:83} reservation:<nil>}
	I1210 06:23:16.272298  314350 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00170be50}
	I1210 06:23:16.272340  314350 network_create.go:124] attempt to create docker network default-k8s-diff-port-643991 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 06:23:16.272410  314350 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-643991 default-k8s-diff-port-643991
	I1210 06:23:16.338291  314350 network_create.go:108] docker network default-k8s-diff-port-643991 192.168.76.0/24 created
	I1210 06:23:16.338324  314350 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-643991" container
	I1210 06:23:16.338386  314350 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:23:16.362813  314350 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-643991 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-643991 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:23:16.388711  314350 oci.go:103] Successfully created a docker volume default-k8s-diff-port-643991
	I1210 06:23:16.388818  314350 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-643991-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-643991 --entrypoint /usr/bin/test -v default-k8s-diff-port-643991:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:23:17.895090  314350 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-643991-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-643991 --entrypoint /usr/bin/test -v default-k8s-diff-port-643991:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib: (1.506229534s)
	I1210 06:23:17.895139  314350 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-643991
	I1210 06:23:17.895206  314350 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:23:17.895224  314350 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:23:17.895287  314350 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-643991:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:23:19.394517  309386 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:23:19.470609  309386 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1210 06:23:20.791049  294283 node_ready.go:57] node "old-k8s-version-424086" has "Ready":"False" status (will retry)
	W1210 06:23:22.791841  294283 node_ready.go:57] node "old-k8s-version-424086" has "Ready":"False" status (will retry)
	I1210 06:23:24.290641  294283 node_ready.go:49] node "old-k8s-version-424086" is "Ready"
	I1210 06:23:24.290674  294283 node_ready.go:38] duration metric: took 12.503526753s for node "old-k8s-version-424086" to be "Ready" ...
	I1210 06:23:24.290690  294283 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:23:24.290745  294283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:23:24.304386  294283 api_server.go:72] duration metric: took 13.193953961s to wait for apiserver process to appear ...
	I1210 06:23:24.304417  294283 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:23:24.304439  294283 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:23:24.309892  294283 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:23:24.311105  294283 api_server.go:141] control plane version: v1.28.0
	I1210 06:23:24.311129  294283 api_server.go:131] duration metric: took 6.704785ms to wait for apiserver health ...
	I1210 06:23:24.311143  294283 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:23:24.315273  294283 system_pods.go:59] 8 kube-system pods found
	I1210 06:23:24.315317  294283 system_pods.go:61] "coredns-5dd5756b68-gmssk" [543e9066-3bdb-41ea-a1dc-b1295d461b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:24.315327  294283 system_pods.go:61] "etcd-old-k8s-version-424086" [857d0be7-758d-4342-b24c-e41d82e2bc6a] Running
	I1210 06:23:24.315334  294283 system_pods.go:61] "kindnet-2qg8n" [26a29cbd-d651-4065-a0d1-299e813902ae] Running
	I1210 06:23:24.315340  294283 system_pods.go:61] "kube-apiserver-old-k8s-version-424086" [6375c645-8acb-4cfe-84c3-335dffc3c875] Running
	I1210 06:23:24.315345  294283 system_pods.go:61] "kube-controller-manager-old-k8s-version-424086" [7745d9a2-c327-4adc-8165-3dcfa743f0df] Running
	I1210 06:23:24.315350  294283 system_pods.go:61] "kube-proxy-v9pgf" [824adde6-eb4e-4d39-a17e-61b3d946415d] Running
	I1210 06:23:24.315354  294283 system_pods.go:61] "kube-scheduler-old-k8s-version-424086" [bd321bb5-770c-4efc-b06a-bb25a4be83cd] Running
	I1210 06:23:24.315366  294283 system_pods.go:61] "storage-provisioner" [6d743349-7ed7-4b69-86ac-9f45fc3c5ab9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:24.315378  294283 system_pods.go:74] duration metric: took 4.227562ms to wait for pod list to return data ...
	I1210 06:23:24.315385  294283 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:23:24.317979  294283 default_sa.go:45] found service account: "default"
	I1210 06:23:24.318002  294283 default_sa.go:55] duration metric: took 2.611784ms for default service account to be created ...
	I1210 06:23:24.318010  294283 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:23:24.321452  294283 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:24.321514  294283 system_pods.go:89] "coredns-5dd5756b68-gmssk" [543e9066-3bdb-41ea-a1dc-b1295d461b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:24.321524  294283 system_pods.go:89] "etcd-old-k8s-version-424086" [857d0be7-758d-4342-b24c-e41d82e2bc6a] Running
	I1210 06:23:24.321531  294283 system_pods.go:89] "kindnet-2qg8n" [26a29cbd-d651-4065-a0d1-299e813902ae] Running
	I1210 06:23:24.321535  294283 system_pods.go:89] "kube-apiserver-old-k8s-version-424086" [6375c645-8acb-4cfe-84c3-335dffc3c875] Running
	I1210 06:23:24.321538  294283 system_pods.go:89] "kube-controller-manager-old-k8s-version-424086" [7745d9a2-c327-4adc-8165-3dcfa743f0df] Running
	I1210 06:23:24.321542  294283 system_pods.go:89] "kube-proxy-v9pgf" [824adde6-eb4e-4d39-a17e-61b3d946415d] Running
	I1210 06:23:24.321545  294283 system_pods.go:89] "kube-scheduler-old-k8s-version-424086" [bd321bb5-770c-4efc-b06a-bb25a4be83cd] Running
	I1210 06:23:24.321550  294283 system_pods.go:89] "storage-provisioner" [6d743349-7ed7-4b69-86ac-9f45fc3c5ab9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:24.321571  294283 retry.go:31] will retry after 295.38409ms: missing components: kube-dns
	I1210 06:23:24.622108  294283 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:24.622152  294283 system_pods.go:89] "coredns-5dd5756b68-gmssk" [543e9066-3bdb-41ea-a1dc-b1295d461b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:24.622157  294283 system_pods.go:89] "etcd-old-k8s-version-424086" [857d0be7-758d-4342-b24c-e41d82e2bc6a] Running
	I1210 06:23:24.622163  294283 system_pods.go:89] "kindnet-2qg8n" [26a29cbd-d651-4065-a0d1-299e813902ae] Running
	I1210 06:23:24.622167  294283 system_pods.go:89] "kube-apiserver-old-k8s-version-424086" [6375c645-8acb-4cfe-84c3-335dffc3c875] Running
	I1210 06:23:24.622171  294283 system_pods.go:89] "kube-controller-manager-old-k8s-version-424086" [7745d9a2-c327-4adc-8165-3dcfa743f0df] Running
	I1210 06:23:24.622174  294283 system_pods.go:89] "kube-proxy-v9pgf" [824adde6-eb4e-4d39-a17e-61b3d946415d] Running
	I1210 06:23:24.622177  294283 system_pods.go:89] "kube-scheduler-old-k8s-version-424086" [bd321bb5-770c-4efc-b06a-bb25a4be83cd] Running
	I1210 06:23:24.622181  294283 system_pods.go:89] "storage-provisioner" [6d743349-7ed7-4b69-86ac-9f45fc3c5ab9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:24.622194  294283 retry.go:31] will retry after 264.611149ms: missing components: kube-dns
	I1210 06:23:24.891861  294283 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:24.891899  294283 system_pods.go:89] "coredns-5dd5756b68-gmssk" [543e9066-3bdb-41ea-a1dc-b1295d461b67] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:24.891910  294283 system_pods.go:89] "etcd-old-k8s-version-424086" [857d0be7-758d-4342-b24c-e41d82e2bc6a] Running
	I1210 06:23:24.891917  294283 system_pods.go:89] "kindnet-2qg8n" [26a29cbd-d651-4065-a0d1-299e813902ae] Running
	I1210 06:23:24.891927  294283 system_pods.go:89] "kube-apiserver-old-k8s-version-424086" [6375c645-8acb-4cfe-84c3-335dffc3c875] Running
	I1210 06:23:24.891933  294283 system_pods.go:89] "kube-controller-manager-old-k8s-version-424086" [7745d9a2-c327-4adc-8165-3dcfa743f0df] Running
	I1210 06:23:24.891938  294283 system_pods.go:89] "kube-proxy-v9pgf" [824adde6-eb4e-4d39-a17e-61b3d946415d] Running
	I1210 06:23:24.891943  294283 system_pods.go:89] "kube-scheduler-old-k8s-version-424086" [bd321bb5-770c-4efc-b06a-bb25a4be83cd] Running
	I1210 06:23:24.891951  294283 system_pods.go:89] "storage-provisioner" [6d743349-7ed7-4b69-86ac-9f45fc3c5ab9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:24.891972  294283 retry.go:31] will retry after 470.230643ms: missing components: kube-dns
	I1210 06:23:21.487017  303393 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001665836s
	I1210 06:23:21.491671  303393 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:23:21.491790  303393 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1210 06:23:21.491932  303393 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:23:21.492054  303393 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:23:22.497898  303393 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.006155858s
	I1210 06:23:24.104943  303393 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.61297098s
	I1210 06:23:22.051715  314350 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-643991:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.156372759s)
	I1210 06:23:22.051752  314350 kic.go:203] duration metric: took 4.15652478s to extract preloaded images to volume ...
	W1210 06:23:22.051860  314350 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:23:22.051903  314350 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:23:22.051957  314350 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:23:22.142335  314350 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-643991 --name default-k8s-diff-port-643991 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-643991 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-643991 --network default-k8s-diff-port-643991 --ip 192.168.76.2 --volume default-k8s-diff-port-643991:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 06:23:22.465150  314350 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Running}}
	I1210 06:23:22.485703  314350 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:23:22.507049  314350 cli_runner.go:164] Run: docker exec default-k8s-diff-port-643991 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:23:22.554998  314350 oci.go:144] the created container "default-k8s-diff-port-643991" has a running status.
	I1210 06:23:22.555032  314350 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa...
	I1210 06:23:22.619236  314350 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:23:22.653524  314350 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:23:22.675527  314350 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:23:22.675550  314350 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-643991 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:23:22.722794  314350 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:23:22.748579  314350 machine.go:94] provisionDockerMachine start ...
	I1210 06:23:22.748694  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:22.772791  314350 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:22.773115  314350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 06:23:22.773143  314350 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:23:22.773870  314350 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56264->127.0.0.1:33109: read: connection reset by peer
	I1210 06:23:25.915672  314350 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643991
	
	I1210 06:23:25.915700  314350 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-643991"
	I1210 06:23:25.915778  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:25.939336  314350 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:25.939690  314350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 06:23:25.939711  314350 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643991 && echo "default-k8s-diff-port-643991" | sudo tee /etc/hostname
	I1210 06:23:25.993457  303393 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501724202s
	I1210 06:23:26.016523  303393 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:23:26.028580  303393 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:23:26.040771  303393 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:23:26.041113  303393 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-713838 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:23:26.050903  303393 kubeadm.go:319] [bootstrap-token] Using token: afcn9x.8xtb8z6h8rcy7yy6
	I1210 06:23:25.367423  294283 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:25.367461  294283 system_pods.go:89] "coredns-5dd5756b68-gmssk" [543e9066-3bdb-41ea-a1dc-b1295d461b67] Running
	I1210 06:23:25.367482  294283 system_pods.go:89] "etcd-old-k8s-version-424086" [857d0be7-758d-4342-b24c-e41d82e2bc6a] Running
	I1210 06:23:25.367489  294283 system_pods.go:89] "kindnet-2qg8n" [26a29cbd-d651-4065-a0d1-299e813902ae] Running
	I1210 06:23:25.367495  294283 system_pods.go:89] "kube-apiserver-old-k8s-version-424086" [6375c645-8acb-4cfe-84c3-335dffc3c875] Running
	I1210 06:23:25.367501  294283 system_pods.go:89] "kube-controller-manager-old-k8s-version-424086" [7745d9a2-c327-4adc-8165-3dcfa743f0df] Running
	I1210 06:23:25.367506  294283 system_pods.go:89] "kube-proxy-v9pgf" [824adde6-eb4e-4d39-a17e-61b3d946415d] Running
	I1210 06:23:25.367512  294283 system_pods.go:89] "kube-scheduler-old-k8s-version-424086" [bd321bb5-770c-4efc-b06a-bb25a4be83cd] Running
	I1210 06:23:25.367517  294283 system_pods.go:89] "storage-provisioner" [6d743349-7ed7-4b69-86ac-9f45fc3c5ab9] Running
	I1210 06:23:25.367526  294283 system_pods.go:126] duration metric: took 1.049508998s to wait for k8s-apps to be running ...
	I1210 06:23:25.367539  294283 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:23:25.367595  294283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:23:25.383857  294283 system_svc.go:56] duration metric: took 16.30762ms WaitForService to wait for kubelet
	I1210 06:23:25.383900  294283 kubeadm.go:587] duration metric: took 14.273460548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:25.383926  294283 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:23:25.387047  294283 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:23:25.387082  294283 node_conditions.go:123] node cpu capacity is 8
	I1210 06:23:25.387103  294283 node_conditions.go:105] duration metric: took 3.170389ms to run NodePressure ...
	I1210 06:23:25.387119  294283 start.go:242] waiting for startup goroutines ...
	I1210 06:23:25.387129  294283 start.go:247] waiting for cluster config update ...
	I1210 06:23:25.387152  294283 start.go:256] writing updated cluster config ...
	I1210 06:23:25.387464  294283 ssh_runner.go:195] Run: rm -f paused
	I1210 06:23:25.392021  294283 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:23:25.396451  294283 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gmssk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.401700  294283 pod_ready.go:94] pod "coredns-5dd5756b68-gmssk" is "Ready"
	I1210 06:23:25.401723  294283 pod_ready.go:86] duration metric: took 5.237319ms for pod "coredns-5dd5756b68-gmssk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.405180  294283 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.410059  294283 pod_ready.go:94] pod "etcd-old-k8s-version-424086" is "Ready"
	I1210 06:23:25.410084  294283 pod_ready.go:86] duration metric: took 4.874462ms for pod "etcd-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.413278  294283 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.418423  294283 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-424086" is "Ready"
	I1210 06:23:25.418455  294283 pod_ready.go:86] duration metric: took 5.145973ms for pod "kube-apiserver-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.421493  294283 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.796705  294283 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-424086" is "Ready"
	I1210 06:23:25.796734  294283 pod_ready.go:86] duration metric: took 375.214168ms for pod "kube-controller-manager-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:25.997515  294283 pod_ready.go:83] waiting for pod "kube-proxy-v9pgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:26.397539  294283 pod_ready.go:94] pod "kube-proxy-v9pgf" is "Ready"
	I1210 06:23:26.397570  294283 pod_ready.go:86] duration metric: took 400.030894ms for pod "kube-proxy-v9pgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:26.598751  294283 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:26.996905  294283 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-424086" is "Ready"
	I1210 06:23:26.996937  294283 pod_ready.go:86] duration metric: took 398.156149ms for pod "kube-scheduler-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:26.996952  294283 pod_ready.go:40] duration metric: took 1.604895278s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:23:27.050616  294283 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 06:23:27.053277  294283 out.go:203] 
	W1210 06:23:27.054897  294283 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 06:23:27.056411  294283 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 06:23:27.057709  294283 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-424086" cluster and "default" namespace by default
	I1210 06:23:26.052521  303393 out.go:252]   - Configuring RBAC rules ...
	I1210 06:23:26.052684  303393 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:23:26.056668  303393 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:23:26.062795  303393 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:23:26.066903  303393 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:23:26.070390  303393 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:23:26.074004  303393 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:23:26.401218  303393 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:23:26.821695  303393 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:23:27.402286  303393 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:23:27.402506  303393 kubeadm.go:319] 
	I1210 06:23:27.402601  303393 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:23:27.402612  303393 kubeadm.go:319] 
	I1210 06:23:27.402704  303393 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:23:27.402721  303393 kubeadm.go:319] 
	I1210 06:23:27.402752  303393 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:23:27.402829  303393 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:23:27.402896  303393 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:23:27.402901  303393 kubeadm.go:319] 
	I1210 06:23:27.402968  303393 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:23:27.402973  303393 kubeadm.go:319] 
	I1210 06:23:27.403031  303393 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:23:27.403037  303393 kubeadm.go:319] 
	I1210 06:23:27.403101  303393 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:23:27.403193  303393 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:23:27.403278  303393 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:23:27.403283  303393 kubeadm.go:319] 
	I1210 06:23:27.403393  303393 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:23:27.403499  303393 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:23:27.403506  303393 kubeadm.go:319] 
	I1210 06:23:27.403710  303393 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token afcn9x.8xtb8z6h8rcy7yy6 \
	I1210 06:23:27.403870  303393 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 06:23:27.403920  303393 kubeadm.go:319] 	--control-plane 
	I1210 06:23:27.403929  303393 kubeadm.go:319] 
	I1210 06:23:27.404094  303393 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:23:27.404104  303393 kubeadm.go:319] 
	I1210 06:23:27.404232  303393 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token afcn9x.8xtb8z6h8rcy7yy6 \
	I1210 06:23:27.404386  303393 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 06:23:27.407556  303393 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:23:27.407709  303393 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:23:27.407740  303393 cni.go:84] Creating CNI manager for ""
	I1210 06:23:27.407751  303393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:27.413591  303393 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:23:26.099414  314350 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643991
	
	I1210 06:23:26.099517  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:26.125190  314350 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:26.125485  314350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 06:23:26.125516  314350 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643991/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:23:26.270649  314350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:23:26.270678  314350 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:23:26.270719  314350 ubuntu.go:190] setting up certificates
	I1210 06:23:26.270730  314350 provision.go:84] configureAuth start
	I1210 06:23:26.270792  314350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:23:26.292243  314350 provision.go:143] copyHostCerts
	I1210 06:23:26.292312  314350 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:23:26.292366  314350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:23:26.292455  314350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:23:26.292593  314350 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:23:26.292606  314350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:23:26.292649  314350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:23:26.292750  314350 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:23:26.292760  314350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:23:26.292798  314350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:23:26.292896  314350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643991 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-643991 localhost minikube]
	I1210 06:23:26.459635  314350 provision.go:177] copyRemoteCerts
	I1210 06:23:26.459793  314350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:23:26.459849  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:26.484084  314350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:23:26.585296  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:23:26.616231  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:23:26.641190  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:23:26.662623  314350 provision.go:87] duration metric: took 391.866625ms to configureAuth
	I1210 06:23:26.662657  314350 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:23:26.662828  314350 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:23:26.662924  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:26.686742  314350 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:26.687032  314350 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I1210 06:23:26.687054  314350 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:23:27.000933  314350 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:23:27.000960  314350 machine.go:97] duration metric: took 4.252354582s to provisionDockerMachine
	I1210 06:23:27.000973  314350 client.go:176] duration metric: took 10.812643035s to LocalClient.Create
	I1210 06:23:27.001063  314350 start.go:167] duration metric: took 10.812717381s to libmachine.API.Create "default-k8s-diff-port-643991"
	I1210 06:23:27.001093  314350 start.go:293] postStartSetup for "default-k8s-diff-port-643991" (driver="docker")
	I1210 06:23:27.001108  314350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:23:27.001187  314350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:23:27.001236  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:27.024545  314350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:23:27.128726  314350 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:23:27.132838  314350 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:23:27.132869  314350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:23:27.132880  314350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:23:27.132932  314350 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:23:27.133046  314350 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:23:27.133186  314350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:23:27.142197  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:23:27.166841  314350 start.go:296] duration metric: took 165.731521ms for postStartSetup
	I1210 06:23:27.167349  314350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:23:27.189891  314350 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json ...
	I1210 06:23:27.190262  314350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:23:27.190331  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:27.214556  314350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:23:27.317986  314350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:23:27.323952  314350 start.go:128] duration metric: took 11.138660305s to createHost
	I1210 06:23:27.323981  314350 start.go:83] releasing machines lock for "default-k8s-diff-port-643991", held for 11.138811478s
	I1210 06:23:27.324050  314350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:23:27.346289  314350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:23:27.346324  314350 ssh_runner.go:195] Run: cat /version.json
	I1210 06:23:27.346384  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:27.346384  314350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:23:27.370523  314350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:23:27.370943  314350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:23:27.534740  314350 ssh_runner.go:195] Run: systemctl --version
	I1210 06:23:27.542925  314350 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:23:27.605361  314350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:23:27.610996  314350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:23:27.611075  314350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:23:27.648457  314350 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:23:27.648515  314350 start.go:496] detecting cgroup driver to use...
	I1210 06:23:27.648553  314350 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:23:27.648604  314350 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:23:27.671479  314350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:23:27.689956  314350 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:23:27.690028  314350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:23:27.710209  314350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:23:27.736777  314350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:23:27.863544  314350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:23:27.970140  314350 docker.go:234] disabling docker service ...
	I1210 06:23:27.970202  314350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:23:28.008742  314350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:23:28.023654  314350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:23:28.124488  314350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:23:28.226076  314350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:23:28.242110  314350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:23:28.263589  314350 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:23:28.263656  314350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.280136  314350 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:23:28.280219  314350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.294071  314350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.307550  314350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.317503  314350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:23:28.327252  314350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.337078  314350 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.352973  314350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:23:28.374964  314350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:23:28.387566  314350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:23:28.403956  314350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:28.503283  314350 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:23:28.676705  314350 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:23:28.676773  314350 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:23:28.680914  314350 start.go:564] Will wait 60s for crictl version
	I1210 06:23:28.680975  314350 ssh_runner.go:195] Run: which crictl
	I1210 06:23:28.685112  314350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:23:28.716228  314350 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:23:28.716312  314350 ssh_runner.go:195] Run: crio --version
	I1210 06:23:28.763308  314350 ssh_runner.go:195] Run: crio --version
	I1210 06:23:28.820986  314350 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 06:23:28.822853  314350 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-643991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:23:28.847572  314350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:23:28.853235  314350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:23:28.866192  314350 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:23:28.866356  314350 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:23:28.866427  314350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:23:28.911637  314350 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:23:28.911665  314350 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:23:28.911722  314350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:23:28.945112  314350 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:23:28.945133  314350 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:23:28.945140  314350 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1210 06:23:28.945272  314350 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-643991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:23:28.945340  314350 ssh_runner.go:195] Run: crio config
	I1210 06:23:28.996024  314350 cni.go:84] Creating CNI manager for ""
	I1210 06:23:28.996055  314350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:28.996078  314350 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:23:28.996105  314350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643991 NodeName:default-k8s-diff-port-643991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:23:28.996262  314350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
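	Note: the generated kubeadm config above is a single multi-document YAML bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; the log below shows it being copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A small Go sketch (standard library only, not minikube's code) that splits such a file on the document separator and reports each document's kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the log; adjust for a local copy of the file.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// Split on the YAML document separator and report each document's kind.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}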
	I1210 06:23:28.996333  314350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:23:29.005855  314350 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:23:29.005923  314350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:23:29.014956  314350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 06:23:29.029794  314350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:23:29.046587  314350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 06:23:29.060715  314350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:23:29.064675  314350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:23:29.076181  314350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:29.160845  314350 ssh_runner.go:195] Run: sudo systemctl start kubelet
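	Note: the bash one-liner at 06:23:29.064675 strips any existing control-plane.minikube.internal line from /etc/hosts and appends a fresh entry, so the update is idempotent. A Go sketch of the same upsert, under the assumption of a writable local test file (the real target is /etc/hosts on the node, written as root); function and file names here are hypothetical:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line
// maps the given name, mirroring the grep -v / echo one-liner in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical local copy; values taken from the log above.
	if err := upsertHost("hosts.test", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}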
	I1210 06:23:29.185291  314350 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991 for IP: 192.168.76.2
	I1210 06:23:29.185309  314350 certs.go:195] generating shared ca certs ...
	I1210 06:23:29.185324  314350 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:29.185490  314350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:23:29.185557  314350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:23:29.185569  314350 certs.go:257] generating profile certs ...
	I1210 06:23:29.185631  314350 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.key
	I1210 06:23:29.185658  314350 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.crt with IP's: []
	I1210 06:23:29.383648  314350 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.crt ...
	I1210 06:23:29.383678  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.crt: {Name:mkcb36d2f30f1ec8c20abec169afd61419406947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:29.383898  314350 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.key ...
	I1210 06:23:29.383920  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.key: {Name:mkd5d19018a9a239561446e398c2dd08ab96534a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:29.384037  314350 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key.a53e5786
	I1210 06:23:29.384058  314350 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt.a53e5786 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 06:23:29.453966  314350 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt.a53e5786 ...
	I1210 06:23:29.453997  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt.a53e5786: {Name:mkda8e5136643cd5f82fbf5fc7c944c2583a779b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:29.454198  314350 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key.a53e5786 ...
	I1210 06:23:29.454219  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key.a53e5786: {Name:mkadf02cb89d755febd38429abd89c87ff2f2e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:29.454343  314350 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt.a53e5786 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt
	I1210 06:23:29.454452  314350 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key.a53e5786 -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key
	I1210 06:23:29.454553  314350 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key
	I1210 06:23:29.454577  314350 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.crt with IP's: []
	I1210 06:23:29.548920  314350 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.crt ...
	I1210 06:23:29.548949  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.crt: {Name:mk8b1ec89ff10f10fb69e62f97c8478885a44159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:29.549144  314350 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key ...
	I1210 06:23:29.549164  314350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key: {Name:mk39d5b3607d784121839258f9f2a726de110418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
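	Note: the profile apiserver certificate generated above is signed by the shared minikubeCA and carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] listed in the log. A compact crypto/x509 sketch of issuing such a SAN-bearing certificate from an in-memory CA (illustrative only; the real flow reuses the existing CA key pair under ~/.minikube and writes the files shown above, and error handling is elided here for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA for illustration only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API server serving cert with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}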
	I1210 06:23:29.549412  314350 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:23:29.549478  314350 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:23:29.549492  314350 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:23:29.549531  314350 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:23:29.549572  314350 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:23:29.549611  314350 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:23:29.549675  314350 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:23:29.550280  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:23:29.573662  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:23:29.595437  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:23:29.617937  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:23:29.640539  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:23:29.663570  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:23:29.685179  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:23:29.706500  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:23:29.729549  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:23:29.753445  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:23:29.777382  314350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:23:29.799104  314350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:23:29.814888  314350 ssh_runner.go:195] Run: openssl version
	I1210 06:23:29.822488  314350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:29.833391  314350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:23:29.843849  314350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:29.848830  314350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:29.848889  314350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:23:29.900332  314350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:23:29.910796  314350 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 06:23:29.921419  314350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:23:29.931868  314350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:23:29.940863  314350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:23:29.945853  314350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:23:29.945917  314350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:23:29.990790  314350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:23:30.001423  314350 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:23:30.012093  314350 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:23:30.021966  314350 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:23:30.031681  314350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:23:30.036587  314350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:23:30.036647  314350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:23:30.079836  314350 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:23:30.088727  314350 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
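	Note: the openssl x509 -hash runs above compute the subject-name hashes that become the /etc/ssl/certs/<hash>.0 symlink names (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-style trust stores look up CAs. A Go sketch of that hash-then-symlink step, assuming the openssl binary is on PATH and the process can write to the certs directory (not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the sequence in the log: hash the cert's subject
// with openssl, then symlink <certsDir>/<hash>.0 back to the PEM file.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}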
	I1210 06:23:30.098655  314350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:23:30.103183  314350 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:23:30.103247  314350 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:30.103333  314350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:23:30.103395  314350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:23:30.136927  314350 cri.go:89] found id: ""
	I1210 06:23:30.137007  314350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:23:30.148241  314350 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:23:30.158531  314350 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:23:30.158592  314350 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:23:30.167324  314350 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:23:30.167351  314350 kubeadm.go:158] found existing configuration files:
	
	I1210 06:23:30.167399  314350 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 06:23:30.175810  314350 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:23:30.175861  314350 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:23:30.183624  314350 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 06:23:30.192017  314350 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:23:30.192081  314350 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:23:30.200032  314350 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 06:23:30.208513  314350 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:23:30.208577  314350 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:23:30.216939  314350 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 06:23:30.225513  314350 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:23:30.225577  314350 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
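	Note: the four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and on a fresh node the grep exits with status 2 because the file does not exist, so the file is removed (a no-op) and kubeadm regenerates it. A Go sketch of that check-and-remove pass, with the endpoint and file list copied from the log (not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
			fmt.Println("removed (or absent):", f)
		}
	}
}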
	I1210 06:23:30.233491  314350 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:23:30.272269  314350 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 06:23:30.272359  314350 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:23:30.293334  314350 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:23:30.293464  314350 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:23:30.293554  314350 kubeadm.go:319] OS: Linux
	I1210 06:23:30.293609  314350 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:23:30.293690  314350 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:23:30.293789  314350 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:23:30.293862  314350 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:23:30.293945  314350 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:23:30.294004  314350 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:23:30.294077  314350 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:23:30.294138  314350 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:23:30.355133  314350 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:23:30.355323  314350 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:23:30.355496  314350 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:23:30.362536  314350 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
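	Note: kubeadm init is launched above with a long --ignore-preflight-errors list because, under the docker driver, the control plane runs inside a container where checks such as Swap, Mem and SystemVerification are expected to fail (the "system verification failed" lines above are therefore non-fatal). A minimal Go sketch of composing and running that command with os/exec, with the flag list abbreviated and the paths taken from the log (illustrative only):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{"Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"}
	script := "env PATH=\"/var/lib/minikube/binaries/v1.34.2:$PATH\" " +
		"kubeadm init --config /var/tmp/minikube/kubeadm.yaml " +
		"--ignore-preflight-errors=" + strings.Join(ignored, ",")
	cmd := exec.Command("sudo", "/bin/bash", "-c", script)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}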
	I1210 06:23:27.418389  303393 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:23:27.424675  303393 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 06:23:27.424700  303393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:23:27.440460  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:23:27.721703  303393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:23:27.721811  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:27.721875  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-713838 minikube.k8s.io/updated_at=2025_12_10T06_23_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=no-preload-713838 minikube.k8s.io/primary=true
	I1210 06:23:27.736242  303393 ops.go:34] apiserver oom_adj: -16
	I1210 06:23:27.844275  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:28.344420  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:28.844821  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:29.345360  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:29.844656  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:30.345197  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
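	Note: the repeated "kubectl get sa default" lines above come from the other profile (PID 303393), which polls roughly every 500ms until the default service account exists before it grants cluster-admin to kube-system:default. A Go sketch of such a wait loop, with the kubectl path and kubeconfig taken from the log and the timeout chosen arbitrarily for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}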
	I1210 06:23:30.364935  314350 out.go:252]   - Generating certificates and keys ...
	I1210 06:23:30.365050  314350 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:23:30.365151  314350 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:23:30.690254  314350 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:23:30.900171  314350 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:23:31.488764  309386 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 06:23:31.488853  309386 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:23:31.489191  309386 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:23:31.489279  309386 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:23:31.489342  309386 kubeadm.go:319] OS: Linux
	I1210 06:23:31.489410  309386 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:23:31.489569  309386 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:23:31.489659  309386 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:23:31.489740  309386 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:23:31.489805  309386 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:23:31.489877  309386 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:23:31.489956  309386 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:23:31.490033  309386 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:23:31.490139  309386 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:23:31.490267  309386 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:23:31.490389  309386 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:23:31.490494  309386 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:23:31.493078  309386 out.go:252]   - Generating certificates and keys ...
	I1210 06:23:31.493173  309386 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:23:31.493278  309386 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:23:31.493380  309386 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:23:31.493441  309386 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:23:31.493535  309386 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:23:31.493603  309386 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:23:31.493669  309386 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:23:31.493823  309386 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-133470 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 06:23:31.493924  309386 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:23:31.494053  309386 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-133470 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1210 06:23:31.494161  309386 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:23:31.494256  309386 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:23:31.494324  309386 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:23:31.494401  309386 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:23:31.494519  309386 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:23:31.494588  309386 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:23:31.494635  309386 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:23:31.494719  309386 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:23:31.494794  309386 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:23:31.494926  309386 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:23:31.495031  309386 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:23:31.498893  309386 out.go:252]   - Booting up control plane ...
	I1210 06:23:31.498998  309386 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:23:31.499121  309386 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:23:31.499220  309386 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:23:31.499383  309386 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:23:31.499545  309386 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:23:31.499692  309386 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:23:31.499795  309386 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:23:31.499829  309386 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:23:31.500034  309386 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:23:31.500208  309386 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:23:31.500267  309386 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001018807s
	I1210 06:23:31.500394  309386 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:23:31.500553  309386 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1210 06:23:31.500677  309386 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:23:31.500818  309386 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:23:31.500945  309386 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.429136687s
	I1210 06:23:31.501047  309386 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.842296086s
	I1210 06:23:31.501142  309386 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501847468s
	I1210 06:23:31.501327  309386 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:23:31.501521  309386 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:23:31.501607  309386 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:23:31.501900  309386 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-133470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:23:31.501997  309386 kubeadm.go:319] [bootstrap-token] Using token: x0568o.jtmiamtizoylkdeb
	I1210 06:23:31.503615  309386 out.go:252]   - Configuring RBAC rules ...
	I1210 06:23:31.503802  309386 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:23:31.503929  309386 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:23:31.504109  309386 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:23:31.504312  309386 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:23:31.504517  309386 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:23:31.504631  309386 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:23:31.504777  309386 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:23:31.504847  309386 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:23:31.504921  309386 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:23:31.504933  309386 kubeadm.go:319] 
	I1210 06:23:31.505022  309386 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:23:31.505030  309386 kubeadm.go:319] 
	I1210 06:23:31.505134  309386 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:23:31.505142  309386 kubeadm.go:319] 
	I1210 06:23:31.505179  309386 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:23:31.505279  309386 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:23:31.505341  309386 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:23:31.505349  309386 kubeadm.go:319] 
	I1210 06:23:31.505421  309386 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:23:31.505430  309386 kubeadm.go:319] 
	I1210 06:23:31.505533  309386 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:23:31.505542  309386 kubeadm.go:319] 
	I1210 06:23:31.505617  309386 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:23:31.505713  309386 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:23:31.505773  309386 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:23:31.505779  309386 kubeadm.go:319] 
	I1210 06:23:31.505849  309386 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:23:31.505966  309386 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:23:31.505976  309386 kubeadm.go:319] 
	I1210 06:23:31.506104  309386 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x0568o.jtmiamtizoylkdeb \
	I1210 06:23:31.506263  309386 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 06:23:31.506305  309386 kubeadm.go:319] 	--control-plane 
	I1210 06:23:31.506321  309386 kubeadm.go:319] 
	I1210 06:23:31.506436  309386 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:23:31.506445  309386 kubeadm.go:319] 
	I1210 06:23:31.506555  309386 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x0568o.jtmiamtizoylkdeb \
	I1210 06:23:31.506736  309386 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 06:23:31.506754  309386 cni.go:84] Creating CNI manager for ""
	I1210 06:23:31.506762  309386 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:31.508603  309386 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:23:30.845067  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:31.344399  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:31.844740  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:32.345227  303393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:32.415452  303393 kubeadm.go:1114] duration metric: took 4.693685143s to wait for elevateKubeSystemPrivileges
	I1210 06:23:32.415510  303393 kubeadm.go:403] duration metric: took 13.835490798s to StartCluster
	I1210 06:23:32.415534  303393 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:32.415607  303393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:23:32.416548  303393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:32.416798  303393 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:23:32.416903  303393 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:23:32.416997  303393 addons.go:70] Setting storage-provisioner=true in profile "no-preload-713838"
	I1210 06:23:32.417013  303393 addons.go:239] Setting addon storage-provisioner=true in "no-preload-713838"
	I1210 06:23:32.417058  303393 host.go:66] Checking if "no-preload-713838" exists ...
	I1210 06:23:32.417117  303393 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:23:32.417180  303393 addons.go:70] Setting default-storageclass=true in profile "no-preload-713838"
	I1210 06:23:32.417194  303393 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713838"
	I1210 06:23:32.417574  303393 cli_runner.go:164] Run: docker container inspect no-preload-713838 --format={{.State.Status}}
	I1210 06:23:32.417642  303393 cli_runner.go:164] Run: docker container inspect no-preload-713838 --format={{.State.Status}}
	I1210 06:23:32.416868  303393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:23:32.420815  303393 out.go:179] * Verifying Kubernetes components...
	I1210 06:23:32.422342  303393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:32.448381  303393 addons.go:239] Setting addon default-storageclass=true in "no-preload-713838"
	I1210 06:23:32.448436  303393 host.go:66] Checking if "no-preload-713838" exists ...
	I1210 06:23:32.449027  303393 cli_runner.go:164] Run: docker container inspect no-preload-713838 --format={{.State.Status}}
	I1210 06:23:32.459457  303393 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:23:32.460854  303393 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:23:32.460878  303393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:23:32.460946  303393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-713838
	I1210 06:23:32.475184  303393 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:23:32.475208  303393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:23:32.475285  303393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-713838
	I1210 06:23:32.494513  303393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/no-preload-713838/id_rsa Username:docker}
	I1210 06:23:32.509437  303393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/no-preload-713838/id_rsa Username:docker}
	I1210 06:23:32.542415  303393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:23:32.607771  303393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:23:32.628024  303393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:23:32.632623  303393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:23:32.757199  303393 node_ready.go:35] waiting up to 6m0s for node "no-preload-713838" to be "Ready" ...
	I1210 06:23:32.758359  303393 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1210 06:23:33.103155  303393 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
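	Note: the pipeline at 06:23:32.542415 rewrites the coredns ConfigMap in place, inserting a hosts block that resolves host.minikube.internal to the network gateway ahead of the forward plugin (it also enables query logging before the errors directive). A Go sketch of the hosts-block edit as pure string manipulation (the real flow round-trips the ConfigMap through kubectl; the sample Corefile below is illustrative):

package main

import (
	"fmt"
	"strings"
)

// addHostRecord inserts a hosts{} stanza ahead of the forward plugin line,
// mirroring the sed expression in the log.
func addHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(addHostRecord(corefile, "192.168.103.1"))
}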
	I1210 06:23:31.510116  309386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:23:31.514763  309386 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 06:23:31.514787  309386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 06:23:31.530684  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:23:31.808298  309386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:23:31.808384  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:31.808416  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-133470 minikube.k8s.io/updated_at=2025_12_10T06_23_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=embed-certs-133470 minikube.k8s.io/primary=true
	I1210 06:23:31.909678  309386 ops.go:34] apiserver oom_adj: -16
	I1210 06:23:31.909803  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:32.410137  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:32.909942  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:33.410077  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:33.910222  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:31.222618  314350 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:23:31.294493  314350 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:23:31.565791  314350 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:23:31.565982  314350 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-643991 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:23:31.731565  314350 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:23:31.731743  314350 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-643991 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 06:23:32.194035  314350 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:23:32.597944  314350 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:23:32.941731  314350 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:23:32.943005  314350 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:23:33.402620  314350 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:23:33.849951  314350 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:23:34.171117  314350 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:23:34.282956  314350 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:23:34.523921  314350 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:23:34.524559  314350 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:23:34.528736  314350 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:23:33.104553  303393 addons.go:530] duration metric: took 687.645425ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 06:23:33.270392  303393 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-713838" context rescaled to 1 replicas
	W1210 06:23:34.761012  303393 node_ready.go:57] node "no-preload-713838" has "Ready":"False" status (will retry)
	I1210 06:23:34.533204  314350 out.go:252]   - Booting up control plane ...
	I1210 06:23:34.533351  314350 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:23:34.533446  314350 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:23:34.533534  314350 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:23:34.549392  314350 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:23:34.549553  314350 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:23:34.557142  314350 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:23:34.557427  314350 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:23:34.557503  314350 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:23:34.665343  314350 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:23:34.665543  314350 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:23:35.167206  314350 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.034566ms
	I1210 06:23:35.170232  314350 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:23:35.170389  314350 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1210 06:23:35.170557  314350 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:23:35.170708  314350 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 06:23:34.410194  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:34.910598  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:35.410668  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:35.910187  309386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:23:36.016917  309386 kubeadm.go:1114] duration metric: took 4.208597989s to wait for elevateKubeSystemPrivileges
	I1210 06:23:36.017006  309386 kubeadm.go:403] duration metric: took 16.826689255s to StartCluster
	I1210 06:23:36.017037  309386 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:36.017129  309386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:23:36.019683  309386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:23:36.019969  309386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:23:36.019988  309386 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:23:36.020058  309386 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-133470"
	I1210 06:23:36.020076  309386 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-133470"
	I1210 06:23:36.020099  309386 host.go:66] Checking if "embed-certs-133470" exists ...
	I1210 06:23:36.020169  309386 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:23:36.020214  309386 addons.go:70] Setting default-storageclass=true in profile "embed-certs-133470"
	I1210 06:23:36.019969  309386 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:23:36.020234  309386 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-133470"
	I1210 06:23:36.020585  309386 cli_runner.go:164] Run: docker container inspect embed-certs-133470 --format={{.State.Status}}
	I1210 06:23:36.020647  309386 cli_runner.go:164] Run: docker container inspect embed-certs-133470 --format={{.State.Status}}
	I1210 06:23:36.024078  309386 out.go:179] * Verifying Kubernetes components...
	I1210 06:23:36.025918  309386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:23:36.056460  309386 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Dec 10 06:23:24 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:24.483666836Z" level=info msg="Starting container: 9681bdd62873816b9af5a225be3b7272b861fae3e17abe7db158f7df0f2e2e56" id=3b3a9061-866f-40fe-8e27-cddfabe02edf name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:23:24 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:24.488889103Z" level=info msg="Started container" PID=2152 containerID=9681bdd62873816b9af5a225be3b7272b861fae3e17abe7db158f7df0f2e2e56 description=kube-system/coredns-5dd5756b68-gmssk/coredns id=3b3a9061-866f-40fe-8e27-cddfabe02edf name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a2df6316d21a3ddef1daa782235c9c875c8fba46298a781b803cab252bbbfd4
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.540996604Z" level=info msg="Running pod sandbox: default/busybox/POD" id=66e29a17-7635-4969-ada6-5b84868dafdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.541090183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.547326189Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6ffcca0ca939effbbf9c5d5b45e2ceffa86df33c57b586b0e55d7150a4dc234d UID:a4d86c19-5b30-46da-bcf1-505d9e0c52a3 NetNS:/var/run/netns/54472c25-ad82-48a2-a5a7-82d8d4c2936c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b498}] Aliases:map[]}"
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.547364881Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.562196156Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6ffcca0ca939effbbf9c5d5b45e2ceffa86df33c57b586b0e55d7150a4dc234d UID:a4d86c19-5b30-46da-bcf1-505d9e0c52a3 NetNS:/var/run/netns/54472c25-ad82-48a2-a5a7-82d8d4c2936c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b498}] Aliases:map[]}"
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.562374156Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.563356896Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.564683206Z" level=info msg="Ran pod sandbox 6ffcca0ca939effbbf9c5d5b45e2ceffa86df33c57b586b0e55d7150a4dc234d with infra container: default/busybox/POD" id=66e29a17-7635-4969-ada6-5b84868dafdc name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.566544054Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7df77ffa-fa17-41d2-81d2-bb83f4801812 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.566686096Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7df77ffa-fa17-41d2-81d2-bb83f4801812 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.566736597Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7df77ffa-fa17-41d2-81d2-bb83f4801812 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.569602709Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aea03299-5f00-4218-8836-39014ecd0f49 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:23:27 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:27.572276837Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.903131197Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=aea03299-5f00-4218-8836-39014ecd0f49 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.904334628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c766521e-1b76-4b47-95e3-b454f66aa736 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.906211901Z" level=info msg="Creating container: default/busybox/busybox" id=7a207a5d-6f9d-4f90-a908-29083ecec666 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.906353804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.911864348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.912464241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.954456069Z" level=info msg="Created container e072ea9170f5d67cb72dfaec40828bf85c26b0b96453a25238cf67b4ce53e19c: default/busybox/busybox" id=7a207a5d-6f9d-4f90-a908-29083ecec666 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.955241689Z" level=info msg="Starting container: e072ea9170f5d67cb72dfaec40828bf85c26b0b96453a25238cf67b4ce53e19c" id=aa792be0-157b-499e-9315-7239af78d101 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:23:28 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:28.957658411Z" level=info msg="Started container" PID=2223 containerID=e072ea9170f5d67cb72dfaec40828bf85c26b0b96453a25238cf67b4ce53e19c description=default/busybox/busybox id=aa792be0-157b-499e-9315-7239af78d101 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ffcca0ca939effbbf9c5d5b45e2ceffa86df33c57b586b0e55d7150a4dc234d
	Dec 10 06:23:35 old-k8s-version-424086 crio[781]: time="2025-12-10T06:23:35.333395942Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e072ea9170f5d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   6ffcca0ca939e       busybox                                          default
	9681bdd628738       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   3a2df6316d21a       coredns-5dd5756b68-gmssk                         kube-system
	4c2066451470e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   9e2ddf33db0ee       storage-provisioner                              kube-system
	fb7282561bc86       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   b1767fed1819e       kindnet-2qg8n                                    kube-system
	f099550645d76       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   f84717fa12bd5       kube-proxy-v9pgf                                 kube-system
	3c66aa47b9d3d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   92c3dbb671af7       etcd-old-k8s-version-424086                      kube-system
	df32b6744b0c9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   7665bd2f97e3b       kube-controller-manager-old-k8s-version-424086   kube-system
	c92c635b2f36f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   5369feb14cc25       kube-apiserver-old-k8s-version-424086            kube-system
	c7f5cfe89ce92       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   fb9a7ba50d5d4       kube-scheduler-old-k8s-version-424086            kube-system
	
	
	==> coredns [9681bdd62873816b9af5a225be3b7272b861fae3e17abe7db158f7df0f2e2e56] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43942 - 34645 "HINFO IN 7063118850682888184.2158948493825688295. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031781204s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-424086
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-424086
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=old-k8s-version-424086
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_22_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-424086
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:23:28 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:23:28 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:23:28 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:23:28 +0000   Wed, 10 Dec 2025 06:23:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-424086
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                e81e4360-349b-45ab-b112-f9ed8c9c5eab
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-gmssk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-424086                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-2qg8n                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-424086             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-424086    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-v9pgf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-424086             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-424086 event: Registered Node old-k8s-version-424086 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-424086 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [3c66aa47b9d3d123ff89c6f45702b808fdb4fc14b4267bd969c6c15818ec9a7c] <==
	{"level":"info","ts":"2025-12-10T06:22:53.55481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-10T06:22:53.554839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-10T06:22:53.554855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:22:53.55486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T06:22:53.554868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-10T06:22:53.554876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T06:22:53.555786Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:22:53.556449Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:22:53.556483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:22:53.556456Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-424086 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:22:53.556772Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:22:53.557234Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:22:53.557748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:22:53.558412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-10T06:22:53.558431Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:22:53.558502Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-10T06:22:53.55931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:23:08.21551Z","caller":"traceutil/trace.go:171","msg":"trace[1408287473] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"118.678037ms","start":"2025-12-10T06:23:08.096808Z","end":"2025-12-10T06:23:08.215486Z","steps":["trace[1408287473] 'process raft request'  (duration: 118.532351ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:23:08.280694Z","caller":"traceutil/trace.go:171","msg":"trace[1110499453] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"169.070561ms","start":"2025-12-10T06:23:08.111598Z","end":"2025-12-10T06:23:08.280669Z","steps":["trace[1110499453] 'process raft request'  (duration: 161.194231ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:23:08.862401Z","caller":"traceutil/trace.go:171","msg":"trace[1076065854] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"105.432297ms","start":"2025-12-10T06:23:08.756944Z","end":"2025-12-10T06:23:08.862376Z","steps":["trace[1076065854] 'process raft request'  (duration: 105.325863ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:23:09.110135Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.461672ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597624615986412 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/service-account-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/service-account-controller\" value_size:131 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:23:09.110277Z","caller":"traceutil/trace.go:171","msg":"trace[1721741050] transaction","detail":"{read_only:false; response_revision:312; number_of_response:1; }","duration":"203.281353ms","start":"2025-12-10T06:23:08.906961Z","end":"2025-12-10T06:23:09.110242Z","steps":["trace[1721741050] 'process raft request'  (duration: 81.242925ms)","trace[1721741050] 'compare'  (duration: 121.328426ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:23:09.241512Z","caller":"traceutil/trace.go:171","msg":"trace[549358225] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"120.397606ms","start":"2025-12-10T06:23:09.121063Z","end":"2025-12-10T06:23:09.24146Z","steps":["trace[549358225] 'process raft request'  (duration: 118.806942ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:23:09.458855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.587305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-10T06:23:09.458935Z","caller":"traceutil/trace.go:171","msg":"trace[671278781] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:314; }","duration":"101.714315ms","start":"2025-12-10T06:23:09.357201Z","end":"2025-12-10T06:23:09.458915Z","steps":["trace[671278781] 'range keys from in-memory index tree'  (duration: 101.435028ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:23:37 up  1:06,  0 user,  load average: 5.98, 4.83, 2.83
	Linux old-k8s-version-424086 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fb7282561bc867d963892beaa952a5107e5c50dad8da21372aff22f3b8f78746] <==
	I1210 06:23:13.591803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:23:13.592058       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:23:13.592220       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:23:13.592241       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:23:13.592269       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:23:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:23:13.888957       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:23:13.889013       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:23:13.889411       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:23:13.987192       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:23:14.273150       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:23:14.273185       1 metrics.go:72] Registering metrics
	I1210 06:23:14.273291       1 controller.go:711] "Syncing nftables rules"
	I1210 06:23:23.896308       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:23:23.896373       1 main.go:301] handling current node
	I1210 06:23:33.891863       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:23:33.891895       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c92c635b2f36f95604bfd358e391a0be69e45661c1050634f62948fec25bec85] <==
	I1210 06:22:54.800985       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1210 06:22:54.801765       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 06:22:54.801801       1 aggregator.go:166] initial CRD sync complete...
	I1210 06:22:54.801809       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 06:22:54.801816       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:22:54.801823       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:22:54.803407       1 controller.go:624] quota admission added evaluator for: namespaces
	E1210 06:22:54.803847       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1210 06:22:54.818821       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 06:22:55.006980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:22:55.722752       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:22:55.728754       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:22:55.728781       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:22:56.335961       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:22:56.384848       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:22:56.515751       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:22:56.523857       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1210 06:22:56.525146       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 06:22:56.530445       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:22:56.769501       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 06:22:57.917130       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 06:22:57.929551       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:22:57.944752       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1210 06:23:11.418787       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 06:23:11.526459       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [df32b6744b0c9dec50d9d3c90963df74e9b3d2f87b37f8590099412c722283d2] <==
	I1210 06:23:11.135112       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:23:11.135150       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 06:23:11.145599       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:23:11.425332       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1210 06:23:11.582934       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v9pgf"
	I1210 06:23:11.590294       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2qg8n"
	I1210 06:23:11.643619       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gmssk"
	I1210 06:23:11.657206       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-t92rh"
	I1210 06:23:11.681903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="258.096557ms"
	I1210 06:23:11.703244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.256323ms"
	I1210 06:23:11.703401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.117µs"
	I1210 06:23:11.703731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.342µs"
	I1210 06:23:11.711493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.708µs"
	I1210 06:23:11.836277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1210 06:23:11.849294       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-t92rh"
	I1210 06:23:11.866273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.43274ms"
	I1210 06:23:11.900356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.002108ms"
	I1210 06:23:11.900877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.367µs"
	I1210 06:23:24.126715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.705µs"
	I1210 06:23:24.140775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.412µs"
	I1210 06:23:25.129571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.173449ms"
	I1210 06:23:25.129686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.753µs"
	I1210 06:23:25.788264       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1210 06:23:25.789215       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-gmssk" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-gmssk"
	I1210 06:23:25.789258       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	
	
	==> kube-proxy [f099550645d764a088225898378f440c40600da6b60a1a265913d9c8ee412cf0] <==
	I1210 06:23:12.043766       1 server_others.go:69] "Using iptables proxy"
	I1210 06:23:12.059670       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1210 06:23:12.095787       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:23:12.099697       1 server_others.go:152] "Using iptables Proxier"
	I1210 06:23:12.099845       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 06:23:12.099869       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 06:23:12.099920       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 06:23:12.100253       1 server.go:846] "Version info" version="v1.28.0"
	I1210 06:23:12.100272       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:23:12.101075       1 config.go:188] "Starting service config controller"
	I1210 06:23:12.101109       1 config.go:97] "Starting endpoint slice config controller"
	I1210 06:23:12.101138       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 06:23:12.101249       1 config.go:315] "Starting node config controller"
	I1210 06:23:12.101824       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 06:23:12.101137       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 06:23:12.201618       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 06:23:12.202747       1 shared_informer.go:318] Caches are synced for node config
	I1210 06:23:12.202734       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [c7f5cfe89ce92115b46479b725e02c7b82514d5a4478235207f65277aebca961] <==
	W1210 06:22:54.778843       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 06:22:54.778883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1210 06:22:54.778985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 06:22:54.779018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1210 06:22:55.681751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 06:22:55.681790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1210 06:22:55.737814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 06:22:55.737983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1210 06:22:55.771933       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 06:22:55.771976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1210 06:22:55.794185       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 06:22:55.794303       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:22:55.874273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 06:22:55.874313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1210 06:22:55.931014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 06:22:55.931072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1210 06:22:55.985105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 06:22:55.985143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1210 06:22:56.079574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 06:22:56.079613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1210 06:22:56.091457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 06:22:56.091529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1210 06:22:56.091879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 06:22:56.091931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1210 06:22:58.670408       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 06:23:10 old-k8s-version-424086 kubelet[1394]: I1210 06:23:10.650832    1394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.612285    1394 topology_manager.go:215] "Topology Admit Handler" podUID="824adde6-eb4e-4d39-a17e-61b3d946415d" podNamespace="kube-system" podName="kube-proxy-v9pgf"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.615400    1394 topology_manager.go:215] "Topology Admit Handler" podUID="26a29cbd-d651-4065-a0d1-299e813902ae" podNamespace="kube-system" podName="kindnet-2qg8n"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671652    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/824adde6-eb4e-4d39-a17e-61b3d946415d-kube-proxy\") pod \"kube-proxy-v9pgf\" (UID: \"824adde6-eb4e-4d39-a17e-61b3d946415d\") " pod="kube-system/kube-proxy-v9pgf"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671718    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/824adde6-eb4e-4d39-a17e-61b3d946415d-xtables-lock\") pod \"kube-proxy-v9pgf\" (UID: \"824adde6-eb4e-4d39-a17e-61b3d946415d\") " pod="kube-system/kube-proxy-v9pgf"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671754    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a29cbd-d651-4065-a0d1-299e813902ae-xtables-lock\") pod \"kindnet-2qg8n\" (UID: \"26a29cbd-d651-4065-a0d1-299e813902ae\") " pod="kube-system/kindnet-2qg8n"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671793    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ggb\" (UniqueName: \"kubernetes.io/projected/824adde6-eb4e-4d39-a17e-61b3d946415d-kube-api-access-d6ggb\") pod \"kube-proxy-v9pgf\" (UID: \"824adde6-eb4e-4d39-a17e-61b3d946415d\") " pod="kube-system/kube-proxy-v9pgf"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671820    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/26a29cbd-d651-4065-a0d1-299e813902ae-cni-cfg\") pod \"kindnet-2qg8n\" (UID: \"26a29cbd-d651-4065-a0d1-299e813902ae\") " pod="kube-system/kindnet-2qg8n"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671851    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26a29cbd-d651-4065-a0d1-299e813902ae-lib-modules\") pod \"kindnet-2qg8n\" (UID: \"26a29cbd-d651-4065-a0d1-299e813902ae\") " pod="kube-system/kindnet-2qg8n"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671880    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/824adde6-eb4e-4d39-a17e-61b3d946415d-lib-modules\") pod \"kube-proxy-v9pgf\" (UID: \"824adde6-eb4e-4d39-a17e-61b3d946415d\") " pod="kube-system/kube-proxy-v9pgf"
	Dec 10 06:23:11 old-k8s-version-424086 kubelet[1394]: I1210 06:23:11.671907    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpncq\" (UniqueName: \"kubernetes.io/projected/26a29cbd-d651-4065-a0d1-299e813902ae-kube-api-access-rpncq\") pod \"kindnet-2qg8n\" (UID: \"26a29cbd-d651-4065-a0d1-299e813902ae\") " pod="kube-system/kindnet-2qg8n"
	Dec 10 06:23:12 old-k8s-version-424086 kubelet[1394]: I1210 06:23:12.073330    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v9pgf" podStartSLOduration=1.073271976 podCreationTimestamp="2025-12-10 06:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:12.072500804 +0000 UTC m=+14.187615982" watchObservedRunningTime="2025-12-10 06:23:12.073271976 +0000 UTC m=+14.188387153"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.096965    1394 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.125135    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2qg8n" podStartSLOduration=11.629652861 podCreationTimestamp="2025-12-10 06:23:11 +0000 UTC" firstStartedPulling="2025-12-10 06:23:11.929719649 +0000 UTC m=+14.044834819" lastFinishedPulling="2025-12-10 06:23:13.425144435 +0000 UTC m=+15.540259602" observedRunningTime="2025-12-10 06:23:14.072437311 +0000 UTC m=+16.187552489" watchObservedRunningTime="2025-12-10 06:23:24.125077644 +0000 UTC m=+26.240192821"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.125614    1394 topology_manager.go:215] "Topology Admit Handler" podUID="543e9066-3bdb-41ea-a1dc-b1295d461b67" podNamespace="kube-system" podName="coredns-5dd5756b68-gmssk"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.125950    1394 topology_manager.go:215] "Topology Admit Handler" podUID="6d743349-7ed7-4b69-86ac-9f45fc3c5ab9" podNamespace="kube-system" podName="storage-provisioner"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.160792    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkcg\" (UniqueName: \"kubernetes.io/projected/6d743349-7ed7-4b69-86ac-9f45fc3c5ab9-kube-api-access-qfkcg\") pod \"storage-provisioner\" (UID: \"6d743349-7ed7-4b69-86ac-9f45fc3c5ab9\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.160921    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6d743349-7ed7-4b69-86ac-9f45fc3c5ab9-tmp\") pod \"storage-provisioner\" (UID: \"6d743349-7ed7-4b69-86ac-9f45fc3c5ab9\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.160973    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s4nl\" (UniqueName: \"kubernetes.io/projected/543e9066-3bdb-41ea-a1dc-b1295d461b67-kube-api-access-6s4nl\") pod \"coredns-5dd5756b68-gmssk\" (UID: \"543e9066-3bdb-41ea-a1dc-b1295d461b67\") " pod="kube-system/coredns-5dd5756b68-gmssk"
	Dec 10 06:23:24 old-k8s-version-424086 kubelet[1394]: I1210 06:23:24.161006    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/543e9066-3bdb-41ea-a1dc-b1295d461b67-config-volume\") pod \"coredns-5dd5756b68-gmssk\" (UID: \"543e9066-3bdb-41ea-a1dc-b1295d461b67\") " pod="kube-system/coredns-5dd5756b68-gmssk"
	Dec 10 06:23:25 old-k8s-version-424086 kubelet[1394]: I1210 06:23:25.120062    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.119989619 podCreationTimestamp="2025-12-10 06:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:25.105361528 +0000 UTC m=+27.220476725" watchObservedRunningTime="2025-12-10 06:23:25.119989619 +0000 UTC m=+27.235104797"
	Dec 10 06:23:25 old-k8s-version-424086 kubelet[1394]: I1210 06:23:25.120591    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gmssk" podStartSLOduration=14.120540054 podCreationTimestamp="2025-12-10 06:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:25.117774192 +0000 UTC m=+27.232889369" watchObservedRunningTime="2025-12-10 06:23:25.120540054 +0000 UTC m=+27.235655236"
	Dec 10 06:23:27 old-k8s-version-424086 kubelet[1394]: I1210 06:23:27.238279    1394 topology_manager.go:215] "Topology Admit Handler" podUID="a4d86c19-5b30-46da-bcf1-505d9e0c52a3" podNamespace="default" podName="busybox"
	Dec 10 06:23:27 old-k8s-version-424086 kubelet[1394]: I1210 06:23:27.283073    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ftc\" (UniqueName: \"kubernetes.io/projected/a4d86c19-5b30-46da-bcf1-505d9e0c52a3-kube-api-access-p8ftc\") pod \"busybox\" (UID: \"a4d86c19-5b30-46da-bcf1-505d9e0c52a3\") " pod="default/busybox"
	Dec 10 06:23:29 old-k8s-version-424086 kubelet[1394]: I1210 06:23:29.119531    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.78311958 podCreationTimestamp="2025-12-10 06:23:27 +0000 UTC" firstStartedPulling="2025-12-10 06:23:27.567181727 +0000 UTC m=+29.682296895" lastFinishedPulling="2025-12-10 06:23:28.903511703 +0000 UTC m=+31.018626880" observedRunningTime="2025-12-10 06:23:29.119286153 +0000 UTC m=+31.234401329" watchObservedRunningTime="2025-12-10 06:23:29.119449565 +0000 UTC m=+31.234564742"
	
	
	==> storage-provisioner [4c2066451470e88b9daa933986f843b1c55a46d1d4246e03342f225d28af7740] <==
	I1210 06:23:24.493848       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:23:24.502961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:23:24.503127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 06:23:24.514096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:23:24.514290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-424086_e7159427-59ec-4053-82c6-700c8bd68628!
	I1210 06:23:24.514292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4180df8f-51ab-47df-91f5-dd51db49c438", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-424086_e7159427-59ec-4053-82c6-700c8bd68628 became leader
	I1210 06:23:24.614607       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-424086_e7159427-59ec-4053-82c6-700c8bd68628!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-424086 -n old-k8s-version-424086
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-424086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.205102ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:23:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-713838 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-713838 describe deploy/metrics-server -n kube-system: exit status 1 (60.323761ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-713838 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-713838
helpers_test.go:244: (dbg) docker inspect no-preload-713838:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2",
	        "Created": "2025-12-10T06:22:56.695408224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304211,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:22:56.733150125Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/hosts",
	        "LogPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2-json.log",
	        "Name": "/no-preload-713838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-713838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-713838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2",
	                "LowerDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-713838",
	                "Source": "/var/lib/docker/volumes/no-preload-713838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-713838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-713838",
	                "name.minikube.sigs.k8s.io": "no-preload-713838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d18196153628c43be0afddde3d56491edada804a369e54551252320e59e7fe4e",
	            "SandboxKey": "/var/run/docker/netns/d18196153628",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-713838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8987097bf8a19a968989f80c7ad4a35d96813c7e6580ac101cba37c806b19e54",
	                    "EndpointID": "c7c620f938aa968630039e3e29f1a9a912bd57f1a984ed7c1c19e3ea09ac878a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "96:b2:6b:a7:db:43",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-713838",
	                        "4a9af4b439c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
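Note: the full docker inspect dump above is what the post-mortem captures for network settings. Two values typically of interest, the forwarded API server port and the container IP, can be pulled directly with a Go format template; a sketch against the same container, with the output taken from the JSON above:

	docker inspect no-preload-713838 --format \
	  '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }} {{ (index .NetworkSettings.Networks "no-preload-713838").IPAddress }}'
	# 33102 192.168.103.2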
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-713838 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-713838 logs -n 25: (1.109790153s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-201263 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo docker system info                                                                                                                                                                                                      │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo containerd config dump                                                                                                                                                                                                  │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo crio config                                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p bridge-201263                                                                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                               │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:23:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:23:54.849893  321295 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:23:54.850002  321295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:54.850014  321295 out.go:374] Setting ErrFile to fd 2...
	I1210 06:23:54.850023  321295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:54.850244  321295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:23:54.850736  321295 out.go:368] Setting JSON to false
	I1210 06:23:54.852093  321295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3986,"bootTime":1765343849,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:23:54.852152  321295 start.go:143] virtualization: kvm guest
	I1210 06:23:54.854159  321295 out.go:179] * [old-k8s-version-424086] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:23:54.855481  321295 notify.go:221] Checking for updates...
	I1210 06:23:54.855500  321295 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:23:54.857046  321295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:23:54.858568  321295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:23:54.860006  321295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:23:54.861357  321295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:23:54.863080  321295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:23:54.864997  321295 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:23:54.867097  321295 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1210 06:23:54.868348  321295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:23:54.894028  321295 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:23:54.894194  321295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:54.951594  321295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:23:54.940554655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:54.951691  321295 docker.go:319] overlay module found
	I1210 06:23:54.953783  321295 out.go:179] * Using the docker driver based on existing profile
	I1210 06:23:54.954954  321295 start.go:309] selected driver: docker
	I1210 06:23:54.954968  321295 start.go:927] validating driver "docker" against &{Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:54.955066  321295 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:23:54.955686  321295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:55.016148  321295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:23:55.004664726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:55.016429  321295 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:55.016454  321295 cni.go:84] Creating CNI manager for ""
	I1210 06:23:55.016535  321295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:55.016576  321295 start.go:353] cluster config:
	{Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:55.018624  321295 out.go:179] * Starting "old-k8s-version-424086" primary control-plane node in "old-k8s-version-424086" cluster
	I1210 06:23:55.019931  321295 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:23:55.021367  321295 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:23:55.022643  321295 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 06:23:55.022679  321295 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:23:55.022689  321295 cache.go:65] Caching tarball of preloaded images
	I1210 06:23:55.022747  321295 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:23:55.022805  321295 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:23:55.022823  321295 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 06:23:55.022932  321295 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/config.json ...
	I1210 06:23:55.045444  321295 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:23:55.045478  321295 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:23:55.045501  321295 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:23:55.045535  321295 start.go:360] acquireMachinesLock for old-k8s-version-424086: {Name:mk21a5d7b5b879531809d880eb98ef4b6572dda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:23:55.045600  321295 start.go:364] duration metric: took 44.502µs to acquireMachinesLock for "old-k8s-version-424086"
	I1210 06:23:55.045623  321295 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:23:55.045633  321295 fix.go:54] fixHost starting: 
	I1210 06:23:55.045840  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:23:55.065163  321295 fix.go:112] recreateIfNeeded on old-k8s-version-424086: state=Stopped err=<nil>
	W1210 06:23:55.065245  321295 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:23:53.247274  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	W1210 06:23:55.247865  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 10 06:23:45 no-preload-713838 crio[771]: time="2025-12-10T06:23:45.487541218Z" level=info msg="Started container" PID=2792 containerID=c9c5bb3f78f042416b5672831acb03709ef48cbc6ae7144ea56a73334c7e9242 description=kube-system/coredns-7d764666f9-hr4gk/coredns id=04b816fe-479e-48aa-a190-68129cff4616 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88105544bb1dba758c2aecaf73226bc9acb387f3a077268ab6cff38a72765ce7
	Dec 10 06:23:45 no-preload-713838 crio[771]: time="2025-12-10T06:23:45.488381205Z" level=info msg="Started container" PID=2791 containerID=1e3fa1bb65f532fc9190c9f7f328924b9a8c9a24721724eaad70330611dba8d3 description=kube-system/storage-provisioner/storage-provisioner id=599c26f2-42f2-4d44-a4f3-b7203a1adf68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed3e523f412cbacababdaf6c17eb9630af1a935778e4fad0db99b71ebb63ba91
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.899708223Z" level=info msg="Running pod sandbox: default/busybox/POD" id=07748889-98e6-4ab2-8fc9-80835eeb07ad name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.899814774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.905360118Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:351098c35a411a22ec48a8f30da238347b5a446f3ae5ff5f9565469b6ef294e6 UID:da64c2c1-2faf-4ff5-9b06-95db44ebc605 NetNS:/var/run/netns/cf63fa18-95f2-4ed3-b9c7-1955c3c1aab3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b048}] Aliases:map[]}"
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.90540786Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.915848411Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:351098c35a411a22ec48a8f30da238347b5a446f3ae5ff5f9565469b6ef294e6 UID:da64c2c1-2faf-4ff5-9b06-95db44ebc605 NetNS:/var/run/netns/cf63fa18-95f2-4ed3-b9c7-1955c3c1aab3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b048}] Aliases:map[]}"
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.915968878Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.916842139Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.917947301Z" level=info msg="Ran pod sandbox 351098c35a411a22ec48a8f30da238347b5a446f3ae5ff5f9565469b6ef294e6 with infra container: default/busybox/POD" id=07748889-98e6-4ab2-8fc9-80835eeb07ad name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.919364628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=98f87bdf-9531-41f1-bbdb-680806aebe40 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.919529176Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=98f87bdf-9531-41f1-bbdb-680806aebe40 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.919631905Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=98f87bdf-9531-41f1-bbdb-680806aebe40 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.920499623Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=695c6009-e1b6-4ab7-a762-17bb3ea233cf name=/runtime.v1.ImageService/PullImage
	Dec 10 06:23:48 no-preload-713838 crio[771]: time="2025-12-10T06:23:48.922127942Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.327361433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=695c6009-e1b6-4ab7-a762-17bb3ea233cf name=/runtime.v1.ImageService/PullImage
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.327946836Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2c603ae9-3698-471b-af82-1fa1ebd65d47 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.329549126Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=efa5ec83-c701-4af1-ae0a-f06bb3f20eb2 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.333508668Z" level=info msg="Creating container: default/busybox/busybox" id=8b05f37d-957d-48ba-8fb5-a02a8b8ff299 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.333620282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.337358787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.337921376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.367079411Z" level=info msg="Created container dbc28a541ed93be2d0d012035a810c05f5dc66f1e526323d2dfafdbc50a47274: default/busybox/busybox" id=8b05f37d-957d-48ba-8fb5-a02a8b8ff299 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.367839385Z" level=info msg="Starting container: dbc28a541ed93be2d0d012035a810c05f5dc66f1e526323d2dfafdbc50a47274" id=a56bab62-b6f5-4464-b5c1-08033cf177ee name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:23:50 no-preload-713838 crio[771]: time="2025-12-10T06:23:50.370240202Z" level=info msg="Started container" PID=2868 containerID=dbc28a541ed93be2d0d012035a810c05f5dc66f1e526323d2dfafdbc50a47274 description=default/busybox/busybox id=a56bab62-b6f5-4464-b5c1-08033cf177ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=351098c35a411a22ec48a8f30da238347b5a446f3ae5ff5f9565469b6ef294e6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dbc28a541ed93       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   351098c35a411       busybox                                     default
	c9c5bb3f78f04       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   88105544bb1db       coredns-7d764666f9-hr4gk                    kube-system
	1e3fa1bb65f53       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   ed3e523f412cb       storage-provisioner                         kube-system
	0b25adf891085       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   ee838b79cde46       kindnet-28s4q                               kube-system
	19553d14d6372       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   17a712368115d       kube-proxy-c62hk                            kube-system
	d944e96f274ed       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   2b281b3c55821       etcd-no-preload-713838                      kube-system
	2e18d5a8af61c       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   83afc740022b0       kube-apiserver-no-preload-713838            kube-system
	00ea458e73e86       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   c5025dace6046       kube-scheduler-no-preload-713838            kube-system
	02a135a611aa1       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   6425104d146cc       kube-controller-manager-no-preload-713838   kube-system
	
	
	==> coredns [c9c5bb3f78f042416b5672831acb03709ef48cbc6ae7144ea56a73334c7e9242] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58593 - 35148 "HINFO IN 3994870623106595690.9193886481359316256. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030122507s
	
	
	==> describe nodes <==
	Name:               no-preload-713838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-713838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=no-preload-713838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-713838
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-713838
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                a0db2673-3e21-49dd-84c2-7b2766bdcea4
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-hr4gk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-713838                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-28s4q                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-713838             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-713838    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-c62hk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-713838             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node no-preload-713838 event: Registered Node no-preload-713838 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [d944e96f274ed2eda4be9bf08a6e816bfe2ab0849769791137af45bfb43bc097] <==
	{"level":"warn","ts":"2025-12-10T06:23:23.400064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.408756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.415830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.422750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.429599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.436831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.444704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.452644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.466074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.475892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.484021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.492706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.499626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.506512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.513667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.521524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.530231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.538565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.546868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.555525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.569398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.576900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.583682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.590996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:23.645531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42098","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:23:58 up  1:06,  0 user,  load average: 5.47, 4.76, 2.85
	Linux no-preload-713838 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b25adf891085f9c1d25f375442bdc62542caf484b0ffc5c2328cec2289c8556] <==
	I1210 06:23:34.216984       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:23:34.217245       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:23:34.217397       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:23:34.217413       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:23:34.217432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:23:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:23:34.514458       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:23:34.514559       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:23:34.514575       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:23:34.515321       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:23:34.815492       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:23:34.815534       1 metrics.go:72] Registering metrics
	I1210 06:23:34.815609       1 controller.go:711] "Syncing nftables rules"
	I1210 06:23:44.514863       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:23:44.514936       1 main.go:301] handling current node
	I1210 06:23:54.517589       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:23:54.517627       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2e18d5a8af61c779314fd93e29f59b502d515c6df9ff73f54ce21ac86ca5c946] <==
	I1210 06:23:24.137100       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:23:24.137438       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:23:24.141227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:23:24.141630       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1210 06:23:24.142113       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:24.151962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:24.331698       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:23:25.042199       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1210 06:23:25.048047       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:23:25.048069       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:23:25.636690       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:23:25.694663       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:23:25.849382       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:23:25.856420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1210 06:23:25.857802       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:23:25.862250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:23:26.078790       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:23:26.809012       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:23:26.820727       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:23:26.830434       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:23:31.631277       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:31.637336       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:32.029971       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:23:32.078316       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 06:23:56.684500       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:47926: use of closed network connection
	
	
	==> kube-controller-manager [02a135a611aa1e56c5192cbc06933bfb0d50803c28db37ad75a50890251ee8de] <==
	I1210 06:23:30.885028       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.892640       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-713838" podCIDRs=["10.244.0.0/24"]
	I1210 06:23:30.884950       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.884924       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885651       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885642       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885712       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.884934       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885728       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885739       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885764       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885779       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885791       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885802       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.885682       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.897367       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:23:30.897527       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-713838"
	I1210 06:23:30.897627       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:23:30.899628       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.904397       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:23:30.985401       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:30.985424       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:23:30.985429       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:23:31.005000       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:45.900346       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [19553d14d6372ab54306adbf2a6a66305304d6ccf24ffad9bef5fb2b1f510a8b] <==
	I1210 06:23:32.695563       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:23:32.780307       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:23:32.880589       1 shared_informer.go:377] "Caches are synced"
	I1210 06:23:32.880707       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:23:32.880850       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:23:32.915357       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:23:32.915428       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:23:32.924366       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:23:32.925361       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 06:23:32.925724       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:23:32.929077       1 config.go:200] "Starting service config controller"
	I1210 06:23:32.933453       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:23:32.929669       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:23:32.930265       1 config.go:309] "Starting node config controller"
	I1210 06:23:32.934500       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:23:32.934545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:23:32.929693       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:23:32.934604       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:23:32.935172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:23:33.033848       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:23:33.035712       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:23:33.035995       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [00ea458e73e8619fbdd7b159b75478657734e3953a1e72e44dd8d8a0724c1380] <==
	E1210 06:23:25.000972       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1210 06:23:25.002122       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1210 06:23:25.009780       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1210 06:23:25.011127       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1210 06:23:25.012045       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1210 06:23:25.012965       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1210 06:23:25.040348       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:23:25.041584       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1210 06:23:25.161206       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1210 06:23:25.162389       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1210 06:23:25.167562       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1210 06:23:25.170712       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1210 06:23:25.251669       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:23:25.252803       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1210 06:23:25.258124       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1210 06:23:25.259200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1210 06:23:25.270710       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:23:25.270710       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 06:23:25.271899       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1210 06:23:25.271995       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 06:23:25.351076       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1210 06:23:25.352155       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 06:23:25.357325       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:23:25.358333       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I1210 06:23:26.597572       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:23:32 no-preload-713838 kubelet[2184]: I1210 06:23:32.145865    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hcc\" (UniqueName: \"kubernetes.io/projected/b48eb137-310e-4bea-a99e-bb776ad77807-kube-api-access-24hcc\") pod \"kube-proxy-c62hk\" (UID: \"b48eb137-310e-4bea-a99e-bb776ad77807\") " pod="kube-system/kube-proxy-c62hk"
	Dec 10 06:23:32 no-preload-713838 kubelet[2184]: I1210 06:23:32.145937    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/55436b1b-68c3-4f73-8929-29ec9ae87ce6-cni-cfg\") pod \"kindnet-28s4q\" (UID: \"55436b1b-68c3-4f73-8929-29ec9ae87ce6\") " pod="kube-system/kindnet-28s4q"
	Dec 10 06:23:32 no-preload-713838 kubelet[2184]: I1210 06:23:32.145986    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b48eb137-310e-4bea-a99e-bb776ad77807-lib-modules\") pod \"kube-proxy-c62hk\" (UID: \"b48eb137-310e-4bea-a99e-bb776ad77807\") " pod="kube-system/kube-proxy-c62hk"
	Dec 10 06:23:32 no-preload-713838 kubelet[2184]: I1210 06:23:32.146011    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j256\" (UniqueName: \"kubernetes.io/projected/55436b1b-68c3-4f73-8929-29ec9ae87ce6-kube-api-access-2j256\") pod \"kindnet-28s4q\" (UID: \"55436b1b-68c3-4f73-8929-29ec9ae87ce6\") " pod="kube-system/kindnet-28s4q"
	Dec 10 06:23:34 no-preload-713838 kubelet[2184]: E1210 06:23:34.212297    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-713838" containerName="kube-scheduler"
	Dec 10 06:23:34 no-preload-713838 kubelet[2184]: I1210 06:23:34.223686    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-c62hk" podStartSLOduration=2.223670136 podStartE2EDuration="2.223670136s" podCreationTimestamp="2025-12-10 06:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:32.812052818 +0000 UTC m=+6.210719700" watchObservedRunningTime="2025-12-10 06:23:34.223670136 +0000 UTC m=+7.622337017"
	Dec 10 06:23:36 no-preload-713838 kubelet[2184]: E1210 06:23:36.063613    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-713838" containerName="kube-apiserver"
	Dec 10 06:23:36 no-preload-713838 kubelet[2184]: I1210 06:23:36.089063    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-28s4q" podStartSLOduration=2.539906385 podStartE2EDuration="4.089041235s" podCreationTimestamp="2025-12-10 06:23:32 +0000 UTC" firstStartedPulling="2025-12-10 06:23:32.420410388 +0000 UTC m=+5.819077261" lastFinishedPulling="2025-12-10 06:23:33.969545246 +0000 UTC m=+7.368212111" observedRunningTime="2025-12-10 06:23:34.82616124 +0000 UTC m=+8.224828121" watchObservedRunningTime="2025-12-10 06:23:36.089041235 +0000 UTC m=+9.487708117"
	Dec 10 06:23:36 no-preload-713838 kubelet[2184]: E1210 06:23:36.819118    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-713838" containerName="etcd"
	Dec 10 06:23:38 no-preload-713838 kubelet[2184]: E1210 06:23:38.235595    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-713838" containerName="kube-controller-manager"
	Dec 10 06:23:44 no-preload-713838 kubelet[2184]: E1210 06:23:44.217411    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-713838" containerName="kube-scheduler"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: I1210 06:23:45.100276    2184 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: I1210 06:23:45.148439    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8gm\" (UniqueName: \"kubernetes.io/projected/2d1d5353-6d76-4f61-9e66-12eee045a735-kube-api-access-mb8gm\") pod \"coredns-7d764666f9-hr4gk\" (UID: \"2d1d5353-6d76-4f61-9e66-12eee045a735\") " pod="kube-system/coredns-7d764666f9-hr4gk"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: I1210 06:23:45.148496    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d1d5353-6d76-4f61-9e66-12eee045a735-config-volume\") pod \"coredns-7d764666f9-hr4gk\" (UID: \"2d1d5353-6d76-4f61-9e66-12eee045a735\") " pod="kube-system/coredns-7d764666f9-hr4gk"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: I1210 06:23:45.248767    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e89d4b38-da41-4612-8cf9-1440b142a9af-tmp\") pod \"storage-provisioner\" (UID: \"e89d4b38-da41-4612-8cf9-1440b142a9af\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: I1210 06:23:45.248992    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l29ct\" (UniqueName: \"kubernetes.io/projected/e89d4b38-da41-4612-8cf9-1440b142a9af-kube-api-access-l29ct\") pod \"storage-provisioner\" (UID: \"e89d4b38-da41-4612-8cf9-1440b142a9af\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: E1210 06:23:45.824359    2184 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hr4gk" containerName="coredns"
	Dec 10 06:23:45 no-preload-713838 kubelet[2184]: I1210 06:23:45.833848    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.833827354 podStartE2EDuration="12.833827354s" podCreationTimestamp="2025-12-10 06:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:45.833782077 +0000 UTC m=+19.232448960" watchObservedRunningTime="2025-12-10 06:23:45.833827354 +0000 UTC m=+19.232494235"
	Dec 10 06:23:46 no-preload-713838 kubelet[2184]: E1210 06:23:46.068264    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-713838" containerName="kube-apiserver"
	Dec 10 06:23:46 no-preload-713838 kubelet[2184]: I1210 06:23:46.079301    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hr4gk" podStartSLOduration=14.079281509 podStartE2EDuration="14.079281509s" podCreationTimestamp="2025-12-10 06:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:45.846335523 +0000 UTC m=+19.245002415" watchObservedRunningTime="2025-12-10 06:23:46.079281509 +0000 UTC m=+19.477948391"
	Dec 10 06:23:46 no-preload-713838 kubelet[2184]: E1210 06:23:46.820670    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-713838" containerName="etcd"
	Dec 10 06:23:46 no-preload-713838 kubelet[2184]: E1210 06:23:46.826334    2184 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hr4gk" containerName="coredns"
	Dec 10 06:23:47 no-preload-713838 kubelet[2184]: E1210 06:23:47.828282    2184 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hr4gk" containerName="coredns"
	Dec 10 06:23:48 no-preload-713838 kubelet[2184]: I1210 06:23:48.670958    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qcq\" (UniqueName: \"kubernetes.io/projected/da64c2c1-2faf-4ff5-9b06-95db44ebc605-kube-api-access-p9qcq\") pod \"busybox\" (UID: \"da64c2c1-2faf-4ff5-9b06-95db44ebc605\") " pod="default/busybox"
	Dec 10 06:23:50 no-preload-713838 kubelet[2184]: I1210 06:23:50.846559    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.437896558 podStartE2EDuration="2.846535908s" podCreationTimestamp="2025-12-10 06:23:48 +0000 UTC" firstStartedPulling="2025-12-10 06:23:48.920061473 +0000 UTC m=+22.318728334" lastFinishedPulling="2025-12-10 06:23:50.328700823 +0000 UTC m=+23.727367684" observedRunningTime="2025-12-10 06:23:50.846425546 +0000 UTC m=+24.245092428" watchObservedRunningTime="2025-12-10 06:23:50.846535908 +0000 UTC m=+24.245202790"
	
	
	==> storage-provisioner [1e3fa1bb65f532fc9190c9f7f328924b9a8c9a24721724eaad70330611dba8d3] <==
	I1210 06:23:45.502275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:23:45.511216       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:23:45.511282       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:23:45.514136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:45.520081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:23:45.520257       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:23:45.520522       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-713838_0d350e6d-a91c-4fb6-8e64-0fa252fa22fa!
	I1210 06:23:45.520444       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13195869-7f7c-4acf-98a0-df0b10b14e40", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-713838_0d350e6d-a91c-4fb6-8e64-0fa252fa22fa became leader
	W1210 06:23:45.526987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:45.531632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:23:45.621502       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-713838_0d350e6d-a91c-4fb6-8e64-0fa252fa22fa!
	W1210 06:23:47.535171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:47.542537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:49.545962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:49.550204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:51.553808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:51.559246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:53.563033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:53.567396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:55.570851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:55.576570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:57.580435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:57.584517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713838 -n no-preload-713838
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-713838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.122275ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:23:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
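The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells out to `sudo runc list -f json` on the node and fails here because /run/runc does not exist. A minimal way to rerun that check by hand against this profile (a sketch based on the command shown in the stderr output; on a crio node runc may simply have no state directory until a container has been created through it):

	out/minikube-linux-amd64 -p embed-certs-133470 ssh "sudo runc list -f json"
	out/minikube-linux-amd64 -p embed-certs-133470 ssh "ls -ld /run/runc"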
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-133470 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-133470 describe deploy/metrics-server -n kube-system: exit status 1 (67.243428ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-133470 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
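Because the metrics-server deployment was never created, there is nothing for the image assertion to inspect. When the addon does enable successfully, the registry/image override can be verified with a single query (a sketch against this profile's kube context; the jsonpath assumes one container in the pod template):

	kubectl --context embed-certs-133470 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The test asserts that this value contains fake.domain/registry.k8s.io/echoserver:1.4.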
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-133470
helpers_test.go:244: (dbg) docker inspect embed-certs-133470:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76",
	        "Created": "2025-12-10T06:23:10.449450924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312249,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:10.650621749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/hosts",
	        "LogPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76-json.log",
	        "Name": "/embed-certs-133470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-133470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-133470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76",
	                "LowerDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-133470",
	                "Source": "/var/lib/docker/volumes/embed-certs-133470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-133470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-133470",
	                "name.minikube.sigs.k8s.io": "embed-certs-133470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "124f590f5d8269915ee8ddb51e399ca90c8fc5387fb32dcab66c4117bc8874ad",
	            "SandboxKey": "/var/run/docker/netns/124f590f5d82",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-133470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c997c342a102de8ded4e3e9d1b30c87213863ef3e6af404e57b008495685711b",
	                    "EndpointID": "82c3277c5bcaf7aece88d3ee1493224b1df1aa1d02bf124f13a497fa0ab64d37",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5a:56:f5:e7:0e:15",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-133470",
	                        "3a1f3f3228b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-133470 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-133470 logs -n 25: (1.112877362s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-201263 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo docker system info                                                                                                                                                                                                      │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo containerd config dump                                                                                                                                                                                                  │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo crio config                                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p bridge-201263                                                                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                               │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:23:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:23:54.849893  321295 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:23:54.850002  321295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:54.850014  321295 out.go:374] Setting ErrFile to fd 2...
	I1210 06:23:54.850023  321295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:54.850244  321295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:23:54.850736  321295 out.go:368] Setting JSON to false
	I1210 06:23:54.852093  321295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3986,"bootTime":1765343849,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:23:54.852152  321295 start.go:143] virtualization: kvm guest
	I1210 06:23:54.854159  321295 out.go:179] * [old-k8s-version-424086] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:23:54.855481  321295 notify.go:221] Checking for updates...
	I1210 06:23:54.855500  321295 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:23:54.857046  321295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:23:54.858568  321295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:23:54.860006  321295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:23:54.861357  321295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:23:54.863080  321295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:23:54.864997  321295 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:23:54.867097  321295 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1210 06:23:54.868348  321295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:23:54.894028  321295 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:23:54.894194  321295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:54.951594  321295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:23:54.940554655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:54.951691  321295 docker.go:319] overlay module found
	I1210 06:23:54.953783  321295 out.go:179] * Using the docker driver based on existing profile
	I1210 06:23:54.954954  321295 start.go:309] selected driver: docker
	I1210 06:23:54.954968  321295 start.go:927] validating driver "docker" against &{Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:54.955066  321295 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:23:54.955686  321295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:55.016148  321295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:23:55.004664726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:55.016429  321295 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:55.016454  321295 cni.go:84] Creating CNI manager for ""
	I1210 06:23:55.016535  321295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:55.016576  321295 start.go:353] cluster config:
	{Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:55.018624  321295 out.go:179] * Starting "old-k8s-version-424086" primary control-plane node in "old-k8s-version-424086" cluster
	I1210 06:23:55.019931  321295 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:23:55.021367  321295 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:23:55.022643  321295 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 06:23:55.022679  321295 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:23:55.022689  321295 cache.go:65] Caching tarball of preloaded images
	I1210 06:23:55.022747  321295 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:23:55.022805  321295 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:23:55.022823  321295 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 06:23:55.022932  321295 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/config.json ...
	I1210 06:23:55.045444  321295 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:23:55.045478  321295 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:23:55.045501  321295 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:23:55.045535  321295 start.go:360] acquireMachinesLock for old-k8s-version-424086: {Name:mk21a5d7b5b879531809d880eb98ef4b6572dda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:23:55.045600  321295 start.go:364] duration metric: took 44.502µs to acquireMachinesLock for "old-k8s-version-424086"
	I1210 06:23:55.045623  321295 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:23:55.045633  321295 fix.go:54] fixHost starting: 
	I1210 06:23:55.045840  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:23:55.065163  321295 fix.go:112] recreateIfNeeded on old-k8s-version-424086: state=Stopped err=<nil>
	W1210 06:23:55.065245  321295 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:23:53.247274  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	W1210 06:23:55.247865  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 10 06:23:48 embed-certs-133470 crio[768]: time="2025-12-10T06:23:48.15756538Z" level=info msg="Starting container: 18abe4e036538e65ada54d08dcffa99699aca755fbf40fb5cd0e6fd40c5b3550" id=763964ea-2a54-419a-83b4-23fda9321cd2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:23:48 embed-certs-133470 crio[768]: time="2025-12-10T06:23:48.159804387Z" level=info msg="Started container" PID=1839 containerID=18abe4e036538e65ada54d08dcffa99699aca755fbf40fb5cd0e6fd40c5b3550 description=kube-system/coredns-66bc5c9577-gw75x/coredns id=763964ea-2a54-419a-83b4-23fda9321cd2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7404c66b601da9f4d32358ead9d44929557d739b1e817466090f56650d56931
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.166723012Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ab7d7b14-c268-4425-b718-8d8df44bfe10 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.166796002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.171361152Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:15c4528cbb4a656f69c975526aeb25835723f2c98934c69ee512cbe99dd7064a UID:711c9c67-e967-4f68-9f76-d8694d86d75f NetNS:/var/run/netns/97afb74d-d94d-400b-8d60-bdf1e6abcf6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d0c578}] Aliases:map[]}"
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.171401673Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.181896583Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:15c4528cbb4a656f69c975526aeb25835723f2c98934c69ee512cbe99dd7064a UID:711c9c67-e967-4f68-9f76-d8694d86d75f NetNS:/var/run/netns/97afb74d-d94d-400b-8d60-bdf1e6abcf6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d0c578}] Aliases:map[]}"
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.18202639Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.182781301Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.183588469Z" level=info msg="Ran pod sandbox 15c4528cbb4a656f69c975526aeb25835723f2c98934c69ee512cbe99dd7064a with infra container: default/busybox/POD" id=ab7d7b14-c268-4425-b718-8d8df44bfe10 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.184849698Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2062fb79-374c-495b-8c0e-b04ec1da041a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.184989457Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2062fb79-374c-495b-8c0e-b04ec1da041a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.185023805Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2062fb79-374c-495b-8c0e-b04ec1da041a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.185813087Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f4d43df0-0e17-48ba-9e28-16c63dd7c1aa name=/runtime.v1.ImageService/PullImage
	Dec 10 06:23:51 embed-certs-133470 crio[768]: time="2025-12-10T06:23:51.187620701Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.427395926Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f4d43df0-0e17-48ba-9e28-16c63dd7c1aa name=/runtime.v1.ImageService/PullImage
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.428143285Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3434a9c0-4c80-4921-9880-74d9d74a4ea4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.429608197Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b740ae8c-07d7-436f-a199-ca299f277588 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.433180829Z" level=info msg="Creating container: default/busybox/busybox" id=7919224a-24e8-4476-8a4e-23a6e22f06bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.433317599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.437257747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.437881586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.468662707Z" level=info msg="Created container a464be8cfb44fe79f1de83206f2b846fcfde807713c162c6657d3d39092ff783: default/busybox/busybox" id=7919224a-24e8-4476-8a4e-23a6e22f06bc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.469295446Z" level=info msg="Starting container: a464be8cfb44fe79f1de83206f2b846fcfde807713c162c6657d3d39092ff783" id=fc6f6cb8-c983-477d-872a-011578356af6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:23:52 embed-certs-133470 crio[768]: time="2025-12-10T06:23:52.471117777Z" level=info msg="Started container" PID=1917 containerID=a464be8cfb44fe79f1de83206f2b846fcfde807713c162c6657d3d39092ff783 description=default/busybox/busybox id=fc6f6cb8-c983-477d-872a-011578356af6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c4528cbb4a656f69c975526aeb25835723f2c98934c69ee512cbe99dd7064a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a464be8cfb44f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   15c4528cbb4a6       busybox                                      default
	18abe4e036538       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   f7404c66b601d       coredns-66bc5c9577-gw75x                     kube-system
	898dae7e4621c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   6f63f14689019       storage-provisioner                          kube-system
	53a10738511a2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      22 seconds ago      Running             kube-proxy                0                   6129ea59bb0b5       kube-proxy-fkdk9                             kube-system
	76602d49bfd8a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   692b80f991381       kindnet-zhm6w                                kube-system
	8316cf066afe6       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      33 seconds ago      Running             kube-controller-manager   0                   f907bb3b0ff63       kube-controller-manager-embed-certs-133470   kube-system
	bf64a30d13ce4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      33 seconds ago      Running             etcd                      0                   2eb99840d2d91       etcd-embed-certs-133470                      kube-system
	02170f96b451e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      33 seconds ago      Running             kube-scheduler            0                   698c13a98fbbd       kube-scheduler-embed-certs-133470            kube-system
	b7df11613bea2       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      33 seconds ago      Running             kube-apiserver            0                   0059767a382c7       kube-apiserver-embed-certs-133470            kube-system
	
	
	==> coredns [18abe4e036538e65ada54d08dcffa99699aca755fbf40fb5cd0e6fd40c5b3550] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51060 - 7217 "HINFO IN 2184732011379211792.3317007195724870329. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033134333s
	
	
	==> describe nodes <==
	Name:               embed-certs-133470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-133470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=embed-certs-133470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-133470
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:23:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:23:47 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:23:47 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:23:47 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:23:47 +0000   Wed, 10 Dec 2025 06:23:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-133470
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                c679f347-b1a0-4ee9-b8eb-d12f4d1d4e6f
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-gw75x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-133470                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-zhm6w                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-133470             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-133470    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-fkdk9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-133470             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node embed-certs-133470 event: Registered Node embed-certs-133470 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-133470 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [bf64a30d13ce4f981823916c281eed4c44c2c0b9318e20428e505493d4d08571] <==
	{"level":"warn","ts":"2025-12-10T06:23:27.630183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.638866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.646135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.654708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.663420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.671663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.689216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.707066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.715356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.723576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.736146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.742412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.751175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.760978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.771290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.785567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.801123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.810062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.822385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.837695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.846187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.855083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:27.927463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43318","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T06:23:35.712616Z","caller":"traceutil/trace.go:172","msg":"trace[1265658234] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"102.87071ms","start":"2025-12-10T06:23:35.609720Z","end":"2025-12-10T06:23:35.712591Z","steps":["trace[1265658234] 'process raft request'  (duration: 82.284015ms)","trace[1265658234] 'compare'  (duration: 20.484959ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:23:35.735084Z","caller":"traceutil/trace.go:172","msg":"trace[430490159] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"124.645638ms","start":"2025-12-10T06:23:35.610417Z","end":"2025-12-10T06:23:35.735063Z","steps":["trace[430490159] 'process raft request'  (duration: 124.548899ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:23:59 up  1:06,  0 user,  load average: 5.47, 4.76, 2.85
	Linux embed-certs-133470 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [76602d49bfd8a94bc93bcb4f9991aa2af7c463bd395096cf9f45205042f16197] <==
	I1210 06:23:37.052052       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:23:37.052699       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 06:23:37.052895       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:23:37.052928       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:23:37.052951       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:23:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:23:37.391990       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:23:37.491577       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:23:37.491638       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:23:37.492535       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:23:37.792281       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:23:37.792322       1 metrics.go:72] Registering metrics
	I1210 06:23:37.792475       1 controller.go:711] "Syncing nftables rules"
	I1210 06:23:47.301585       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:23:47.301668       1 main.go:301] handling current node
	I1210 06:23:57.301593       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:23:57.301647       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b7df11613bea29509ea6de02c1142885aa022f95ec385192394278c3b7316df1] <==
	E1210 06:23:28.506313       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1210 06:23:28.553035       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:23:28.556849       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:28.556925       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 06:23:28.563976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:28.564243       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:23:28.654705       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:23:29.356458       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:23:29.360766       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:23:29.360787       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:23:29.934885       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:23:29.977776       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:23:30.059746       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:23:30.066133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1210 06:23:30.067366       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:23:30.072157       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:23:30.392998       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:23:30.891162       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:23:30.906460       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:23:30.917047       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:23:36.059248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:36.082863       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:36.312558       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:23:36.396105       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 06:23:57.957540       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:39432: use of closed network connection
	
	
	==> kube-controller-manager [8316cf066afe690cd747123eac63f0f4262032faf0dca2c06fcc77daa017eaff] <==
	I1210 06:23:35.361717       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 06:23:35.369976       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-133470" podCIDRs=["10.244.0.0/24"]
	I1210 06:23:35.390867       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:23:35.390896       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:23:35.390935       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:23:35.390994       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 06:23:35.391335       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:23:35.391353       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:23:35.391499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:23:35.391573       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:23:35.391792       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 06:23:35.392127       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 06:23:35.392161       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:23:35.392241       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:23:35.393744       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:23:35.393771       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:23:35.394053       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:23:35.395976       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:23:35.397094       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:23:35.397132       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:23:35.398448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:23:35.404697       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:23:35.411993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:23:35.413247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:23:50.342260       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [53a10738511a27f406735f97b619a388d634bcb645e1599354d8adc9dbb656b8] <==
	I1210 06:23:36.855290       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:23:36.922072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:23:37.022655       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:23:37.022730       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 06:23:37.022846       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:23:37.048330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:23:37.048397       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:23:37.056710       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:23:37.057190       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:23:37.057232       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:23:37.058740       1 config.go:200] "Starting service config controller"
	I1210 06:23:37.058771       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:23:37.058785       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:23:37.058795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:23:37.058770       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:23:37.058817       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:23:37.058888       1 config.go:309] "Starting node config controller"
	I1210 06:23:37.058895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:23:37.058900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:23:37.159861       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:23:37.159909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:23:37.159926       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [02170f96b451e10569d2a6a28c9b260a3be411cf18e6e89c8e53ced9a5f81787] <==
	E1210 06:23:28.416237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:23:28.416336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:23:28.416343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:23:28.416459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:23:28.416405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:23:28.416543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:23:28.416400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:23:28.416737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 06:23:28.416817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:23:29.259669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:23:29.263012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:23:29.321709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 06:23:29.369883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:23:29.392200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:23:29.412589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:23:29.415964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:23:29.419217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:23:29.483570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:23:29.496779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:23:29.548586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:23:29.659920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:23:29.663174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 06:23:29.710352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:23:29.844273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 06:23:32.710967       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:23:31 embed-certs-133470 kubelet[1324]: I1210 06:23:31.814927    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-133470" podStartSLOduration=1.81490332 podStartE2EDuration="1.81490332s" podCreationTimestamp="2025-12-10 06:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:31.802293365 +0000 UTC m=+1.158247424" watchObservedRunningTime="2025-12-10 06:23:31.81490332 +0000 UTC m=+1.170857369"
	Dec 10 06:23:31 embed-certs-133470 kubelet[1324]: I1210 06:23:31.825317    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-133470" podStartSLOduration=1.825293949 podStartE2EDuration="1.825293949s" podCreationTimestamp="2025-12-10 06:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:31.825174495 +0000 UTC m=+1.181128566" watchObservedRunningTime="2025-12-10 06:23:31.825293949 +0000 UTC m=+1.181247994"
	Dec 10 06:23:31 embed-certs-133470 kubelet[1324]: I1210 06:23:31.825556    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-133470" podStartSLOduration=1.825542853 podStartE2EDuration="1.825542853s" podCreationTimestamp="2025-12-10 06:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:31.814873982 +0000 UTC m=+1.170828027" watchObservedRunningTime="2025-12-10 06:23:31.825542853 +0000 UTC m=+1.181496924"
	Dec 10 06:23:31 embed-certs-133470 kubelet[1324]: I1210 06:23:31.855378    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-133470" podStartSLOduration=1.855354043 podStartE2EDuration="1.855354043s" podCreationTimestamp="2025-12-10 06:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:31.838093697 +0000 UTC m=+1.194047741" watchObservedRunningTime="2025-12-10 06:23:31.855354043 +0000 UTC m=+1.211308090"
	Dec 10 06:23:35 embed-certs-133470 kubelet[1324]: I1210 06:23:35.378994    1324 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 06:23:35 embed-certs-133470 kubelet[1324]: I1210 06:23:35.379704    1324 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456670    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4897efd-dd92-4bec-8784-0352ec933eba-lib-modules\") pod \"kube-proxy-fkdk9\" (UID: \"e4897efd-dd92-4bec-8784-0352ec933eba\") " pod="kube-system/kube-proxy-fkdk9"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456732    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4897efd-dd92-4bec-8784-0352ec933eba-xtables-lock\") pod \"kube-proxy-fkdk9\" (UID: \"e4897efd-dd92-4bec-8784-0352ec933eba\") " pod="kube-system/kube-proxy-fkdk9"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456758    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7ab9de47-d8c7-438f-892e-28d2c4fd45b8-cni-cfg\") pod \"kindnet-zhm6w\" (UID: \"7ab9de47-d8c7-438f-892e-28d2c4fd45b8\") " pod="kube-system/kindnet-zhm6w"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456780    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ab9de47-d8c7-438f-892e-28d2c4fd45b8-xtables-lock\") pod \"kindnet-zhm6w\" (UID: \"7ab9de47-d8c7-438f-892e-28d2c4fd45b8\") " pod="kube-system/kindnet-zhm6w"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456802    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ab9de47-d8c7-438f-892e-28d2c4fd45b8-lib-modules\") pod \"kindnet-zhm6w\" (UID: \"7ab9de47-d8c7-438f-892e-28d2c4fd45b8\") " pod="kube-system/kindnet-zhm6w"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456825    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6vtq\" (UniqueName: \"kubernetes.io/projected/7ab9de47-d8c7-438f-892e-28d2c4fd45b8-kube-api-access-f6vtq\") pod \"kindnet-zhm6w\" (UID: \"7ab9de47-d8c7-438f-892e-28d2c4fd45b8\") " pod="kube-system/kindnet-zhm6w"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456850    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4897efd-dd92-4bec-8784-0352ec933eba-kube-proxy\") pod \"kube-proxy-fkdk9\" (UID: \"e4897efd-dd92-4bec-8784-0352ec933eba\") " pod="kube-system/kube-proxy-fkdk9"
	Dec 10 06:23:36 embed-certs-133470 kubelet[1324]: I1210 06:23:36.456871    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hg6\" (UniqueName: \"kubernetes.io/projected/e4897efd-dd92-4bec-8784-0352ec933eba-kube-api-access-75hg6\") pod \"kube-proxy-fkdk9\" (UID: \"e4897efd-dd92-4bec-8784-0352ec933eba\") " pod="kube-system/kube-proxy-fkdk9"
	Dec 10 06:23:37 embed-certs-133470 kubelet[1324]: I1210 06:23:37.804771    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fkdk9" podStartSLOduration=1.804741688 podStartE2EDuration="1.804741688s" podCreationTimestamp="2025-12-10 06:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:37.787344392 +0000 UTC m=+7.143298440" watchObservedRunningTime="2025-12-10 06:23:37.804741688 +0000 UTC m=+7.160695734"
	Dec 10 06:23:37 embed-certs-133470 kubelet[1324]: I1210 06:23:37.819746    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zhm6w" podStartSLOduration=1.81972296 podStartE2EDuration="1.81972296s" podCreationTimestamp="2025-12-10 06:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:37.818950567 +0000 UTC m=+7.174904613" watchObservedRunningTime="2025-12-10 06:23:37.81972296 +0000 UTC m=+7.175677006"
	Dec 10 06:23:47 embed-certs-133470 kubelet[1324]: I1210 06:23:47.771622    1324 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:23:47 embed-certs-133470 kubelet[1324]: I1210 06:23:47.844622    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc2a0a30-365d-40a5-9f1a-bc551e6beec4-tmp\") pod \"storage-provisioner\" (UID: \"fc2a0a30-365d-40a5-9f1a-bc551e6beec4\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:47 embed-certs-133470 kubelet[1324]: I1210 06:23:47.844669    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnz2l\" (UniqueName: \"kubernetes.io/projected/fc2a0a30-365d-40a5-9f1a-bc551e6beec4-kube-api-access-qnz2l\") pod \"storage-provisioner\" (UID: \"fc2a0a30-365d-40a5-9f1a-bc551e6beec4\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:47 embed-certs-133470 kubelet[1324]: I1210 06:23:47.844692    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t492n\" (UniqueName: \"kubernetes.io/projected/e735e195-23a6-4d4f-9d07-f49ed4f8e1ee-kube-api-access-t492n\") pod \"coredns-66bc5c9577-gw75x\" (UID: \"e735e195-23a6-4d4f-9d07-f49ed4f8e1ee\") " pod="kube-system/coredns-66bc5c9577-gw75x"
	Dec 10 06:23:47 embed-certs-133470 kubelet[1324]: I1210 06:23:47.844722    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e735e195-23a6-4d4f-9d07-f49ed4f8e1ee-config-volume\") pod \"coredns-66bc5c9577-gw75x\" (UID: \"e735e195-23a6-4d4f-9d07-f49ed4f8e1ee\") " pod="kube-system/coredns-66bc5c9577-gw75x"
	Dec 10 06:23:48 embed-certs-133470 kubelet[1324]: I1210 06:23:48.813348    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gw75x" podStartSLOduration=12.813324162 podStartE2EDuration="12.813324162s" podCreationTimestamp="2025-12-10 06:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:48.813055259 +0000 UTC m=+18.169009298" watchObservedRunningTime="2025-12-10 06:23:48.813324162 +0000 UTC m=+18.169278208"
	Dec 10 06:23:48 embed-certs-133470 kubelet[1324]: I1210 06:23:48.823429    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.823403088 podStartE2EDuration="12.823403088s" podCreationTimestamp="2025-12-10 06:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:48.823317769 +0000 UTC m=+18.179271815" watchObservedRunningTime="2025-12-10 06:23:48.823403088 +0000 UTC m=+18.179357134"
	Dec 10 06:23:50 embed-certs-133470 kubelet[1324]: I1210 06:23:50.964065    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxjl\" (UniqueName: \"kubernetes.io/projected/711c9c67-e967-4f68-9f76-d8694d86d75f-kube-api-access-qhxjl\") pod \"busybox\" (UID: \"711c9c67-e967-4f68-9f76-d8694d86d75f\") " pod="default/busybox"
	Dec 10 06:23:52 embed-certs-133470 kubelet[1324]: I1210 06:23:52.824336    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.580571209 podStartE2EDuration="2.824312316s" podCreationTimestamp="2025-12-10 06:23:50 +0000 UTC" firstStartedPulling="2025-12-10 06:23:51.185350978 +0000 UTC m=+20.541305003" lastFinishedPulling="2025-12-10 06:23:52.429092086 +0000 UTC m=+21.785046110" observedRunningTime="2025-12-10 06:23:52.824052887 +0000 UTC m=+22.180006933" watchObservedRunningTime="2025-12-10 06:23:52.824312316 +0000 UTC m=+22.180266361"
	
	
	==> storage-provisioner [898dae7e4621ca281d4c86b6573388b1b4f0cfcd6d8b08c4e64a407b9fd48dd8] <==
	I1210 06:23:48.165995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:23:48.175199       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:23:48.175244       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:23:48.177550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:48.183277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:23:48.183504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:23:48.183738       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-133470_62ed43b6-6d54-48a7-b86c-15cb95b31aef!
	I1210 06:23:48.183736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcbf147d-e027-4c81-b883-f30651ab340b", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-133470_62ed43b6-6d54-48a7-b86c-15cb95b31aef became leader
	W1210 06:23:48.186212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:48.190421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:23:48.284659       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-133470_62ed43b6-6d54-48a7-b86c-15cb95b31aef!
	W1210 06:23:50.194285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:50.202961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:52.206540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:52.211446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:54.214462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:54.220738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:56.224582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:56.228916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:58.233125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:58.237791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-133470 -n embed-certs-133470
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-133470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.28s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.799231ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-643991 describe deploy/metrics-server -n kube-system: exit status 1 (60.61351ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-643991 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-643991
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-643991:

-- stdout --
	[
	    {
	        "Id": "acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484",
	        "Created": "2025-12-10T06:23:22.165163212Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315953,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:22.211627753Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/hostname",
	        "HostsPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/hosts",
	        "LogPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484-json.log",
	        "Name": "/default-k8s-diff-port-643991",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-643991:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-643991",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484",
	                "LowerDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-643991",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-643991/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-643991",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-643991",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-643991",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "099cd05218981098e756835654fe1ce62ebc995fd5e841b4aee1baef164dd812",
	            "SandboxKey": "/var/run/docker/netns/099cd0521898",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-643991": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0a24a8ad90ffaf1aa41a72e3c38eeed58406686f85b6ce46090a934c6571e421",
	                    "EndpointID": "1000abc16097b25a83f49f36eaabf887d7f176daf105a92e532bd89a79b5b894",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "9e:68:dc:52:93:84",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-643991",
	                        "acbf5c836807"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643991 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-643991 logs -n 25: (1.010250179s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-201263 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p bridge-201263 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo containerd config dump                                                                                                                                                                                                  │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo crio config                                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p bridge-201263                                                                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                               │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:23:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:23:54.849893  321295 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:23:54.850002  321295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:54.850014  321295 out.go:374] Setting ErrFile to fd 2...
	I1210 06:23:54.850023  321295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:23:54.850244  321295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:23:54.850736  321295 out.go:368] Setting JSON to false
	I1210 06:23:54.852093  321295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3986,"bootTime":1765343849,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:23:54.852152  321295 start.go:143] virtualization: kvm guest
	I1210 06:23:54.854159  321295 out.go:179] * [old-k8s-version-424086] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:23:54.855481  321295 notify.go:221] Checking for updates...
	I1210 06:23:54.855500  321295 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:23:54.857046  321295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:23:54.858568  321295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:23:54.860006  321295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:23:54.861357  321295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:23:54.863080  321295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:23:54.864997  321295 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:23:54.867097  321295 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1210 06:23:54.868348  321295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:23:54.894028  321295 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:23:54.894194  321295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:54.951594  321295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:23:54.940554655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:54.951691  321295 docker.go:319] overlay module found
	I1210 06:23:54.953783  321295 out.go:179] * Using the docker driver based on existing profile
	I1210 06:23:54.954954  321295 start.go:309] selected driver: docker
	I1210 06:23:54.954968  321295 start.go:927] validating driver "docker" against &{Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:54.955066  321295 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:23:54.955686  321295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:23:55.016148  321295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:23:55.004664726 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:23:55.016429  321295 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:55.016454  321295 cni.go:84] Creating CNI manager for ""
	I1210 06:23:55.016535  321295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:23:55.016576  321295 start.go:353] cluster config:
	{Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:23:55.018624  321295 out.go:179] * Starting "old-k8s-version-424086" primary control-plane node in "old-k8s-version-424086" cluster
	I1210 06:23:55.019931  321295 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:23:55.021367  321295 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:23:55.022643  321295 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 06:23:55.022679  321295 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:23:55.022689  321295 cache.go:65] Caching tarball of preloaded images
	I1210 06:23:55.022747  321295 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:23:55.022805  321295 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:23:55.022823  321295 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 06:23:55.022932  321295 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/config.json ...
	I1210 06:23:55.045444  321295 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:23:55.045478  321295 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:23:55.045501  321295 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:23:55.045535  321295 start.go:360] acquireMachinesLock for old-k8s-version-424086: {Name:mk21a5d7b5b879531809d880eb98ef4b6572dda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:23:55.045600  321295 start.go:364] duration metric: took 44.502µs to acquireMachinesLock for "old-k8s-version-424086"
	I1210 06:23:55.045623  321295 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:23:55.045633  321295 fix.go:54] fixHost starting: 
	I1210 06:23:55.045840  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:23:55.065163  321295 fix.go:112] recreateIfNeeded on old-k8s-version-424086: state=Stopped err=<nil>
	W1210 06:23:55.065245  321295 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:23:53.247274  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	W1210 06:23:55.247865  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	I1210 06:23:55.067279  321295 out.go:252] * Restarting existing docker container for "old-k8s-version-424086" ...
	I1210 06:23:55.067387  321295 cli_runner.go:164] Run: docker start old-k8s-version-424086
	I1210 06:23:55.324929  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:23:55.344685  321295 kic.go:430] container "old-k8s-version-424086" state is running.
	I1210 06:23:55.345114  321295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-424086
	I1210 06:23:55.365928  321295 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/config.json ...
	I1210 06:23:55.366351  321295 machine.go:94] provisionDockerMachine start ...
	I1210 06:23:55.366437  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:55.385385  321295 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:55.385639  321295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1210 06:23:55.385653  321295 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:23:55.386303  321295 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54376->127.0.0.1:33114: read: connection reset by peer
	I1210 06:23:58.535442  321295 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-424086
	
	I1210 06:23:58.535487  321295 ubuntu.go:182] provisioning hostname "old-k8s-version-424086"
	I1210 06:23:58.535547  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:58.558303  321295 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:58.558721  321295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1210 06:23:58.558750  321295 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-424086 && echo "old-k8s-version-424086" | sudo tee /etc/hostname
	I1210 06:23:58.711450  321295 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-424086
	
	I1210 06:23:58.711541  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:58.735799  321295 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:58.736111  321295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1210 06:23:58.736141  321295 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-424086' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-424086/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-424086' | sudo tee -a /etc/hosts; 
				fi
			fi
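The shell above only adds a 127.0.1.1 fallback when no /etc/hosts line already maps the new hostname (inside a Docker container one usually exists already). A minimal check of the result, not part of the captured run, using the container name from the log above:
	docker exec old-k8s-version-424086 grep old-k8s-version-424086 /etc/hosts
	# some mapping for the hostname should be printed; "127.0.1.1 old-k8s-version-424086" only if the fallback branch ran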
	I1210 06:23:58.884345  321295 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:23:58.884375  321295 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:23:58.884412  321295 ubuntu.go:190] setting up certificates
	I1210 06:23:58.884426  321295 provision.go:84] configureAuth start
	I1210 06:23:58.884509  321295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-424086
	I1210 06:23:58.908479  321295 provision.go:143] copyHostCerts
	I1210 06:23:58.908560  321295 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:23:58.908579  321295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:23:58.908673  321295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:23:58.908812  321295 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:23:58.908824  321295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:23:58.908866  321295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:23:58.908970  321295 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:23:58.908992  321295 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:23:58.909032  321295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:23:58.909126  321295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-424086 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-424086]
	I1210 06:23:58.980385  321295 provision.go:177] copyRemoteCerts
	I1210 06:23:58.980444  321295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:23:58.980500  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:59.001175  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:23:59.099882  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:23:59.121563  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 06:23:59.141459  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:23:59.163589  321295 provision.go:87] duration metric: took 279.142325ms to configureAuth
	I1210 06:23:59.163620  321295 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:23:59.163839  321295 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:23:59.163960  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:59.184815  321295 main.go:143] libmachine: Using SSH client type: native
	I1210 06:23:59.185031  321295 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1210 06:23:59.185055  321295 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:23:59.548293  321295 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:23:59.548318  321295 machine.go:97] duration metric: took 4.181949024s to provisionDockerMachine
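The CRIO_MINIKUBE_OPTIONS file written above ends up at /etc/sysconfig/crio.minikube; presumably the crio unit in the kicbase image loads it via an EnvironmentFile directive (an assumption, not shown in this log). A hypothetical way to confirm both pieces from the host:
	docker exec old-k8s-version-424086 cat /etc/sysconfig/crio.minikube            # should show the --insecure-registry flag written above
	docker exec old-k8s-version-424086 systemctl cat crio | grep -i environmentfile  # assumption: the unit references the sysconfig file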
	I1210 06:23:59.548332  321295 start.go:293] postStartSetup for "old-k8s-version-424086" (driver="docker")
	I1210 06:23:59.548345  321295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:23:59.548407  321295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:23:59.548457  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:59.571720  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:23:59.669288  321295 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:23:59.673260  321295 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:23:59.673286  321295 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:23:59.673297  321295 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:23:59.673357  321295 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:23:59.673452  321295 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:23:59.673630  321295 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:23:59.682086  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:23:59.703803  321295 start.go:296] duration metric: took 155.457124ms for postStartSetup
	I1210 06:23:59.703903  321295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:23:59.703944  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:59.723872  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:23:59.818384  321295 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:23:59.823340  321295 fix.go:56] duration metric: took 4.777701908s for fixHost
	I1210 06:23:59.823369  321295 start.go:83] releasing machines lock for "old-k8s-version-424086", held for 4.777754974s
	I1210 06:23:59.823439  321295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-424086
	I1210 06:23:59.844109  321295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:23:59.844115  321295 ssh_runner.go:195] Run: cat /version.json
	I1210 06:23:59.844205  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:23:59.844226  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	W1210 06:23:57.248576  314350 node_ready.go:57] node "default-k8s-diff-port-643991" has "Ready":"False" status (will retry)
	I1210 06:23:57.748023  314350 node_ready.go:49] node "default-k8s-diff-port-643991" is "Ready"
	I1210 06:23:57.748057  314350 node_ready.go:38] duration metric: took 11.003666146s for node "default-k8s-diff-port-643991" to be "Ready" ...
	I1210 06:23:57.748074  314350 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:23:57.748129  314350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:23:57.761799  314350 api_server.go:72] duration metric: took 11.287751231s to wait for apiserver process to appear ...
	I1210 06:23:57.761825  314350 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:23:57.761847  314350 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:23:57.766164  314350 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
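The healthz probe above is an HTTPS GET whose success body is the literal string "ok". By hand it would look roughly like this (illustrative only; -k skips verification since the cluster CA is not in the host trust store, and the endpoint is normally readable without client credentials):
	curl -k https://192.168.76.2:8444/healthz
	# ok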
	I1210 06:23:57.767173  314350 api_server.go:141] control plane version: v1.34.2
	I1210 06:23:57.767196  314350 api_server.go:131] duration metric: took 5.364885ms to wait for apiserver health ...
	I1210 06:23:57.767205  314350 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:23:57.771191  314350 system_pods.go:59] 8 kube-system pods found
	I1210 06:23:57.771241  314350 system_pods.go:61] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:57.771257  314350 system_pods.go:61] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running
	I1210 06:23:57.771270  314350 system_pods.go:61] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running
	I1210 06:23:57.771276  314350 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running
	I1210 06:23:57.771287  314350 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running
	I1210 06:23:57.771296  314350 system_pods.go:61] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running
	I1210 06:23:57.771301  314350 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running
	I1210 06:23:57.771310  314350 system_pods.go:61] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:57.771322  314350 system_pods.go:74] duration metric: took 4.110713ms to wait for pod list to return data ...
	I1210 06:23:57.771332  314350 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:23:57.773545  314350 default_sa.go:45] found service account: "default"
	I1210 06:23:57.773567  314350 default_sa.go:55] duration metric: took 2.228526ms for default service account to be created ...
	I1210 06:23:57.773577  314350 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:23:57.776887  314350 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:57.776922  314350 system_pods.go:89] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:57.776930  314350 system_pods.go:89] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running
	I1210 06:23:57.776940  314350 system_pods.go:89] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running
	I1210 06:23:57.776946  314350 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running
	I1210 06:23:57.776952  314350 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running
	I1210 06:23:57.776961  314350 system_pods.go:89] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running
	I1210 06:23:57.776967  314350 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running
	I1210 06:23:57.776977  314350 system_pods.go:89] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:57.777006  314350 retry.go:31] will retry after 190.656607ms: missing components: kube-dns
	I1210 06:23:57.972618  314350 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:57.972654  314350 system_pods.go:89] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:57.972661  314350 system_pods.go:89] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running
	I1210 06:23:57.972670  314350 system_pods.go:89] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running
	I1210 06:23:57.972674  314350 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running
	I1210 06:23:57.972680  314350 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running
	I1210 06:23:57.972685  314350 system_pods.go:89] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running
	I1210 06:23:57.972691  314350 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running
	I1210 06:23:57.972697  314350 system_pods.go:89] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:57.972715  314350 retry.go:31] will retry after 357.814959ms: missing components: kube-dns
	I1210 06:23:58.335665  314350 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:58.335700  314350 system_pods.go:89] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:23:58.335706  314350 system_pods.go:89] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running
	I1210 06:23:58.335716  314350 system_pods.go:89] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running
	I1210 06:23:58.335720  314350 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running
	I1210 06:23:58.335724  314350 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running
	I1210 06:23:58.335728  314350 system_pods.go:89] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running
	I1210 06:23:58.335734  314350 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running
	I1210 06:23:58.335742  314350 system_pods.go:89] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:23:58.335759  314350 retry.go:31] will retry after 315.510136ms: missing components: kube-dns
	I1210 06:23:58.656803  314350 system_pods.go:86] 8 kube-system pods found
	I1210 06:23:58.656832  314350 system_pods.go:89] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Running
	I1210 06:23:58.656838  314350 system_pods.go:89] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running
	I1210 06:23:58.656843  314350 system_pods.go:89] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running
	I1210 06:23:58.656856  314350 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running
	I1210 06:23:58.656862  314350 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running
	I1210 06:23:58.656867  314350 system_pods.go:89] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running
	I1210 06:23:58.656871  314350 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running
	I1210 06:23:58.656875  314350 system_pods.go:89] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Running
	I1210 06:23:58.656893  314350 system_pods.go:126] duration metric: took 883.30759ms to wait for k8s-apps to be running ...
	I1210 06:23:58.656904  314350 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:23:58.656945  314350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:23:58.671994  314350 system_svc.go:56] duration metric: took 15.079571ms WaitForService to wait for kubelet
	I1210 06:23:58.672027  314350 kubeadm.go:587] duration metric: took 12.19797958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:23:58.672049  314350 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:23:58.675797  314350 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:23:58.675831  314350 node_conditions.go:123] node cpu capacity is 8
	I1210 06:23:58.675847  314350 node_conditions.go:105] duration metric: took 3.793212ms to run NodePressure ...
	I1210 06:23:58.675862  314350 start.go:242] waiting for startup goroutines ...
	I1210 06:23:58.675871  314350 start.go:247] waiting for cluster config update ...
	I1210 06:23:58.675884  314350 start.go:256] writing updated cluster config ...
	I1210 06:23:58.676235  314350 ssh_runner.go:195] Run: rm -f paused
	I1210 06:23:58.681292  314350 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:23:58.685323  314350 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:58.690035  314350 pod_ready.go:94] pod "coredns-66bc5c9577-znsz6" is "Ready"
	I1210 06:23:58.690056  314350 pod_ready.go:86] duration metric: took 4.695154ms for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:58.692248  314350 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:58.697065  314350 pod_ready.go:94] pod "etcd-default-k8s-diff-port-643991" is "Ready"
	I1210 06:23:58.697090  314350 pod_ready.go:86] duration metric: took 4.818705ms for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:58.699081  314350 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:58.703970  314350 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-643991" is "Ready"
	I1210 06:23:58.703993  314350 pod_ready.go:86] duration metric: took 4.890893ms for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:58.706488  314350 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:59.086563  314350 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-643991" is "Ready"
	I1210 06:23:59.086590  314350 pod_ready.go:86] duration metric: took 380.074098ms for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:59.287165  314350 pod_ready.go:83] waiting for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:59.685960  314350 pod_ready.go:94] pod "kube-proxy-mkpzc" is "Ready"
	I1210 06:23:59.685989  314350 pod_ready.go:86] duration metric: took 398.782518ms for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:23:59.886815  314350 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:00.286432  314350 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-643991" is "Ready"
	I1210 06:24:00.286460  314350 pod_ready.go:86] duration metric: took 399.618472ms for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:00.286504  314350 pod_ready.go:40] duration metric: took 1.605180969s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:00.347010  314350 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:24:00.349040  314350 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-643991" cluster and "default" namespace by default
	I1210 06:23:59.864997  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:23:59.865595  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:24:00.015275  321295 ssh_runner.go:195] Run: systemctl --version
	I1210 06:24:00.021857  321295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:24:00.056300  321295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:24:00.062093  321295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:24:00.062175  321295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:24:00.071971  321295 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
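The find/mv above would rename any pre-existing *bridge* or *podman* CNI configs to *.mk_disabled so they cannot compete with the CNI minikube manages; in this run none were present. A hypothetical way to see what remains in the directory:
	docker exec old-k8s-version-424086 ls -la /etc/cni/net.d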
	I1210 06:24:00.071995  321295 start.go:496] detecting cgroup driver to use...
	I1210 06:24:00.072024  321295 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:24:00.072075  321295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:24:00.088935  321295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:24:00.105424  321295 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:24:00.105515  321295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:24:00.123199  321295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:24:00.138280  321295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:24:00.231972  321295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:24:00.335800  321295 docker.go:234] disabling docker service ...
	I1210 06:24:00.335864  321295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:24:00.356968  321295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:24:00.375863  321295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:24:00.488328  321295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:24:00.610369  321295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:24:00.630273  321295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:24:00.650256  321295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1210 06:24:00.650330  321295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.661313  321295 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:24:00.661389  321295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.671985  321295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.682845  321295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.692806  321295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:24:00.702101  321295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.711664  321295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.720819  321295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:00.730209  321295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:24:00.738008  321295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:24:00.745706  321295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:00.823138  321295 ssh_runner.go:195] Run: sudo systemctl restart crio
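The sed commands above pin the pause image, switch cri-o to the systemd cgroup manager, move conmon into the pod cgroup, and allow binding of unprivileged low ports before the daemon is restarted. A hypothetical spot-check of the resulting drop-in (run inside the node, e.g. over the same SSH session minikube uses or via docker exec):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",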
	I1210 06:24:00.970254  321295 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:24:00.970331  321295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:24:00.974769  321295 start.go:564] Will wait 60s for crictl version
	I1210 06:24:00.974837  321295 ssh_runner.go:195] Run: which crictl
	I1210 06:24:00.978805  321295 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:24:01.004360  321295 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:24:01.004427  321295 ssh_runner.go:195] Run: crio --version
	I1210 06:24:01.033497  321295 ssh_runner.go:195] Run: crio --version
	I1210 06:24:01.064724  321295 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1210 06:24:01.066241  321295 cli_runner.go:164] Run: docker network inspect old-k8s-version-424086 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:01.085216  321295 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:24:01.089543  321295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
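The one-liner above strips any stale host.minikube.internal entry, appends the current gateway mapping, and copies the temp file back with sudo (a plain redirect would not run as root). Illustratively (not captured here), the node's /etc/hosts then contains the echoed line:
	docker exec old-k8s-version-424086 grep host.minikube.internal /etc/hosts
	# 192.168.85.1	host.minikube.internal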
	I1210 06:24:01.100324  321295 kubeadm.go:884] updating cluster {Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:24:01.100441  321295 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 06:24:01.100507  321295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:24:01.134581  321295 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:24:01.134606  321295 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:24:01.134652  321295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:24:01.161746  321295 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:24:01.161768  321295 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:24:01.161775  321295 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1210 06:24:01.161873  321295 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-424086 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
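The kubelet flags above are not written into the main unit; they land in a systemd drop-in (the 372-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down). A hypothetical way to view the effective unit afterwards, not part of this run:
	docker exec old-k8s-version-424086 systemctl cat kubelet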
	I1210 06:24:01.161958  321295 ssh_runner.go:195] Run: crio config
	I1210 06:24:01.208788  321295 cni.go:84] Creating CNI manager for ""
	I1210 06:24:01.208809  321295 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:01.208823  321295 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:24:01.208842  321295 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-424086 NodeName:old-k8s-version-424086 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:24:01.208966  321295 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-424086"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:24:01.209024  321295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1210 06:24:01.217424  321295 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:24:01.217520  321295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:24:01.225721  321295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 06:24:01.238941  321295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:24:01.252323  321295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
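The 2159-byte file shipped above is the kubeadm config rendered a few lines earlier; it is written to kubeadm.yaml.new and only applied if it differs from the existing kubeadm.yaml, which is exactly the diff run at 06:24:02 below. Checking by hand would look like this (sketch, not from this run):
	docker exec old-k8s-version-424086 sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new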
	I1210 06:24:01.265538  321295 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:24:01.269488  321295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:24:01.280313  321295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:01.363404  321295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:24:01.396383  321295 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086 for IP: 192.168.85.2
	I1210 06:24:01.396415  321295 certs.go:195] generating shared ca certs ...
	I1210 06:24:01.396435  321295 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:01.396612  321295 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:24:01.396669  321295 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:24:01.396682  321295 certs.go:257] generating profile certs ...
	I1210 06:24:01.396789  321295 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/client.key
	I1210 06:24:01.396852  321295 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/apiserver.key.dd709a65
	I1210 06:24:01.396897  321295 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/proxy-client.key
	I1210 06:24:01.397024  321295 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:24:01.397067  321295 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:24:01.397080  321295 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:24:01.397123  321295 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:24:01.397159  321295 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:24:01.397194  321295 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:24:01.397263  321295 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:24:01.397904  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:24:01.418800  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:24:01.438517  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:24:01.458974  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:24:01.483285  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 06:24:01.504457  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:24:01.523377  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:24:01.542672  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/old-k8s-version-424086/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:24:01.562173  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:24:01.581554  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:24:01.603442  321295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:24:01.624154  321295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:24:01.638294  321295 ssh_runner.go:195] Run: openssl version
	I1210 06:24:01.645853  321295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:24:01.654941  321295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:24:01.663441  321295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:24:01.667716  321295 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:24:01.667771  321295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:24:01.708113  321295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:24:01.716640  321295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:24:01.724616  321295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:24:01.733209  321295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:24:01.737267  321295 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:24:01.737332  321295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:24:01.778505  321295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:24:01.787013  321295 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:01.795421  321295 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:24:01.804087  321295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:01.808564  321295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:01.808619  321295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:01.847520  321295 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
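The hash-named links tested above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: the file name is the output of openssl x509 -hash plus a ".0" suffix, which is how OpenSSL locates a CA under /etc/ssl/certs. For example (editor's sketch, values taken from the checks above):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0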
	I1210 06:24:01.857166  321295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:24:01.862188  321295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:24:01.903817  321295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:24:01.955749  321295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:24:02.009388  321295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:24:02.066530  321295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:24:02.123679  321295 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
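Note on the six openssl runs above: -checkend 86400 exits non-zero if the certificate would expire within the next 86400 seconds, so a clean pass means each control-plane cert is still valid for at least another 24 hours. For instance:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for >24h"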
	I1210 06:24:02.166962  321295 kubeadm.go:401] StartCluster: {Name:old-k8s-version-424086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-424086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:02.167060  321295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:24:02.167149  321295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:24:02.203312  321295 cri.go:89] found id: "93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5"
	I1210 06:24:02.203340  321295 cri.go:89] found id: "99a520617b27091388284c36bef3465458e40aa0ab841df386ee409f39ccbee2"
	I1210 06:24:02.203346  321295 cri.go:89] found id: "70b526ae1f4ce1d3bdeff2ca86e39c33688d70edf03a257a1b0eeda29e7059a9"
	I1210 06:24:02.203354  321295 cri.go:89] found id: "eca25d4da655329c0f900bc2d9a38df2f8b3abd27a1fb23973129f968c2ffbea"
	I1210 06:24:02.203364  321295 cri.go:89] found id: ""
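The IDs above are the kube-system containers cri-o still knows about from before the restart. A hypothetical follow-up (not executed in this run) to see what one of them is, using the same crictl invoked elsewhere in this log, run inside the node:
	sudo crictl inspect 93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5 | head -n 20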
	I1210 06:24:02.203418  321295 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:24:02.219009  321295 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:02Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:24:02.219098  321295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:24:02.228780  321295 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:24:02.228801  321295 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:24:02.228852  321295 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:24:02.236905  321295 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:24:02.238303  321295 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-424086" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:02.239324  321295 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8832/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-424086" cluster setting kubeconfig missing "old-k8s-version-424086" context setting]
	I1210 06:24:02.240745  321295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:02.243057  321295 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:24:02.251750  321295 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 06:24:02.251787  321295 kubeadm.go:602] duration metric: took 22.978383ms to restartPrimaryControlPlane
	I1210 06:24:02.251798  321295 kubeadm.go:403] duration metric: took 84.846558ms to StartCluster
	I1210 06:24:02.251817  321295 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:02.251888  321295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:02.254539  321295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:02.254842  321295 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:02.254909  321295 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:24:02.255015  321295 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-424086"
	I1210 06:24:02.255038  321295 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-424086"
	W1210 06:24:02.255046  321295 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:24:02.255045  321295 addons.go:70] Setting dashboard=true in profile "old-k8s-version-424086"
	I1210 06:24:02.255073  321295 addons.go:239] Setting addon dashboard=true in "old-k8s-version-424086"
	I1210 06:24:02.255076  321295 host.go:66] Checking if "old-k8s-version-424086" exists ...
	W1210 06:24:02.255083  321295 addons.go:248] addon dashboard should already be in state true
	I1210 06:24:02.255100  321295 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:24:02.255112  321295 host.go:66] Checking if "old-k8s-version-424086" exists ...
	I1210 06:24:02.255147  321295 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-424086"
	I1210 06:24:02.255156  321295 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-424086"
	I1210 06:24:02.255346  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:24:02.255595  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:24:02.255622  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:24:02.258601  321295 out.go:179] * Verifying Kubernetes components...
	I1210 06:24:02.260070  321295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:02.283272  321295 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-424086"
	W1210 06:24:02.283295  321295 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:24:02.283321  321295 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:24:02.283348  321295 host.go:66] Checking if "old-k8s-version-424086" exists ...
	I1210 06:24:02.283336  321295 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:24:02.283862  321295 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:24:02.285329  321295 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:24:02.285372  321295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:24:02.285429  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:24:02.286686  321295 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:24:02.287983  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:24:02.288007  321295 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:24:02.288063  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:24:02.315919  321295 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:24:02.316011  321295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:24:02.316090  321295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:24:02.325549  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:24:02.328223  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:24:02.346728  321295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:24:02.425504  321295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:24:02.442175  321295 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-424086" to be "Ready" ...
	I1210 06:24:02.445598  321295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:24:02.445738  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:24:02.445759  321295 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:24:02.461817  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:24:02.461865  321295 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:24:02.464655  321295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:24:02.479040  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:24:02.479065  321295 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:24:02.498876  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:24:02.498900  321295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:24:02.518724  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:24:02.518750  321295 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:24:02.535696  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:24:02.535722  321295 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:24:02.551641  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:24:02.551675  321295 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:24:02.567555  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:24:02.567583  321295 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:24:02.583506  321295 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:24:02.583535  321295 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:24:02.598252  321295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:24:04.078091  321295 node_ready.go:49] node "old-k8s-version-424086" is "Ready"
	I1210 06:24:04.078124  321295 node_ready.go:38] duration metric: took 1.635884586s for node "old-k8s-version-424086" to be "Ready" ...
	I1210 06:24:04.078140  321295 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:24:04.078195  321295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:24:04.762388  321295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.316709537s)
	I1210 06:24:04.762449  321295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.297761743s)
	I1210 06:24:05.144162  321295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.545859813s)
	I1210 06:24:05.144219  321295 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.066004861s)
	I1210 06:24:05.144249  321295 api_server.go:72] duration metric: took 2.889375724s to wait for apiserver process to appear ...
	I1210 06:24:05.144311  321295 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:24:05.144332  321295 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:24:05.147986  321295 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-424086 addons enable metrics-server
	
	I1210 06:24:05.149807  321295 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1210 06:24:05.149833  321295 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1210 06:24:05.151632  321295 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
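For reference, the verbose healthz output captured above can be requested directly through kubectl once the repaired kubeconfig is in place. A minimal sketch, assuming the kubectl context carries the same name as the minikube profile:

	kubectl --context old-k8s-version-424086 get --raw='/healthz?verbose'

The failing [-]poststarthook/rbac/bootstrap-roles entry is usually transient and clears once the apiserver finishes reconciling its bootstrap RBAC roles.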
	
	
	==> CRI-O <==
	Dec 10 06:23:58 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:23:58.040692338Z" level=info msg="Starting container: 7c540488971939afe3a58aa13c7f5e40a403068a6bb23cdbeab83d302654cea1" id=dd2652e3-3a0a-42fe-9909-68559e0f6b0f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:23:58 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:23:58.043053876Z" level=info msg="Started container" PID=1867 containerID=7c540488971939afe3a58aa13c7f5e40a403068a6bb23cdbeab83d302654cea1 description=kube-system/coredns-66bc5c9577-znsz6/coredns id=dd2652e3-3a0a-42fe-9909-68559e0f6b0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=320650b45fa4e3bc2f052b185aba624d8a086396245e25a515a8e6bc200fdcbd
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.878752045Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b7ded110-d712-4d99-a090-c9ba4d3a5bcf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.878818616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.884340675Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c746879092d51f45f42e206463f83c32e080dfbc4edf39fd30a2d6ee2017bab6 UID:0d90f3d0-1378-4217-b9cc-2116a1d1dbbb NetNS:/var/run/netns/21c37821-b189-479c-a634-e3a662c0db42 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000500530}] Aliases:map[]}"
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.88436958Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.893940394Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c746879092d51f45f42e206463f83c32e080dfbc4edf39fd30a2d6ee2017bab6 UID:0d90f3d0-1378-4217-b9cc-2116a1d1dbbb NetNS:/var/run/netns/21c37821-b189-479c-a634-e3a662c0db42 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000500530}] Aliases:map[]}"
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.894060448Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.894814392Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.895665437Z" level=info msg="Ran pod sandbox c746879092d51f45f42e206463f83c32e080dfbc4edf39fd30a2d6ee2017bab6 with infra container: default/busybox/POD" id=b7ded110-d712-4d99-a090-c9ba4d3a5bcf name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.896934029Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=24a8beb0-361c-4557-b1d9-1fbf1b7bd62f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.897056902Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=24a8beb0-361c-4557-b1d9-1fbf1b7bd62f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.897113946Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=24a8beb0-361c-4557-b1d9-1fbf1b7bd62f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.897973863Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8744eab6-8ffb-46f2-9ce3-ce68c65eae18 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:24:00 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:00.900128577Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.15745729Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8744eab6-8ffb-46f2-9ce3-ce68c65eae18 name=/runtime.v1.ImageService/PullImage
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.16075299Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=42f8eb50-5591-4efc-9576-d013afbdd14e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.162383944Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8b661810-a82c-4a20-aa39-2c4a23e659e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.166568024Z" level=info msg="Creating container: default/busybox/busybox" id=21900247-9950-4926-9874-e999d0aea074 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.16670666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.171375547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.171946373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.213779231Z" level=info msg="Created container 03006a0c39be969870774f5006014dc05d09ec7d11357cb637aa0d2c223db304: default/busybox/busybox" id=21900247-9950-4926-9874-e999d0aea074 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.214627978Z" level=info msg="Starting container: 03006a0c39be969870774f5006014dc05d09ec7d11357cb637aa0d2c223db304" id=b7784c7d-db74-4e73-8ead-c726f0706a04 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:02 default-k8s-diff-port-643991 crio[770]: time="2025-12-10T06:24:02.217668483Z" level=info msg="Started container" PID=1942 containerID=03006a0c39be969870774f5006014dc05d09ec7d11357cb637aa0d2c223db304 description=default/busybox/busybox id=b7784c7d-db74-4e73-8ead-c726f0706a04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c746879092d51f45f42e206463f83c32e080dfbc4edf39fd30a2d6ee2017bab6
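The container status table below reflects CRI-O's view of the node. Roughly the same information can be gathered on the node itself; a sketch, with <container-id> standing in for any ID from the table:

	sudo crictl ps -a
	sudo crictl logs <container-id>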
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	03006a0c39be9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   c746879092d51       busybox                                                default
	7c54048897193       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   320650b45fa4e       coredns-66bc5c9577-znsz6                               kube-system
	a0eeabf08487d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   a6ea7271b6238       storage-provisioner                                    kube-system
	22532f83924b2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   76760d80dcddb       kindnet-7j6ns                                          kube-system
	d06c4885ee139       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   28ffb0d2a41e6       kube-proxy-mkpzc                                       kube-system
	b96d61a365a59       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   7ba835559ed6f       etcd-default-k8s-diff-port-643991                      kube-system
	e937271d5100c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   e6e86fa0d54c8       kube-scheduler-default-k8s-diff-port-643991            kube-system
	58b80537992b0       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   05760306d9210       kube-apiserver-default-k8s-diff-port-643991            kube-system
	ab4014e022046       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   50c7dfa580b34       kube-controller-manager-default-k8s-diff-port-643991   kube-system
	
	
	==> coredns [7c540488971939afe3a58aa13c7f5e40a403068a6bb23cdbeab83d302654cea1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54841 - 65364 "HINFO IN 1507997122871757791.6658389204828268322. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030544158s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-643991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-643991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=default-k8s-diff-port-643991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-643991
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:24:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:23:57 +0000   Wed, 10 Dec 2025 06:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-643991
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                dd07e36d-8369-41a9-8fa1-68f38e5abb55
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-znsz6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-643991                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-7j6ns                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-643991             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-643991    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-mkpzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-643991             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-643991 event: Registered Node default-k8s-diff-port-643991 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-643991 status is now: NodeReady
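The node description above (labels, conditions, capacity, non-terminated pods, and events) corresponds to what kubectl reports for the same node; a sketch, assuming the default-k8s-diff-port-643991 context from this run's kubeconfig:

	kubectl --context default-k8s-diff-port-643991 describe node default-k8s-diff-port-643991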
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [b96d61a365a59f14483f55800b4e3248a0424f224ce589117ec86c85f1dd7485] <==
	{"level":"warn","ts":"2025-12-10T06:23:37.229656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.237970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.247991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.256081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.264080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.276489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.283058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.291314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.300850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.309700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.319343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.327519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.335916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.344841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.352578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.368334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.378744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.386188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.398331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.407305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.422763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.426694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.434510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.442958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:23:37.497148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:24:10 up  1:06,  0 user,  load average: 7.15, 5.16, 3.00
	Linux default-k8s-diff-port-643991 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22532f83924b20c0efffef872cfc5c9ddcd26c895c1271376925615454c344df] <==
	I1210 06:23:47.038549       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:23:47.038830       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:23:47.038999       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:23:47.039016       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:23:47.039037       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:23:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:23:47.334756       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:23:47.334793       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:23:47.334809       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:23:47.335015       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:23:47.835153       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:23:47.835183       1 metrics.go:72] Registering metrics
	I1210 06:23:47.835234       1 controller.go:711] "Syncing nftables rules"
	I1210 06:23:57.242020       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:23:57.242101       1 main.go:301] handling current node
	I1210 06:24:07.240492       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:24:07.240530       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58b80537992b0b4bc6cc362be687794bb60c29ef1126aad516ef4c333a085851] <==
	E1210 06:23:38.119725       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1210 06:23:38.135568       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:23:38.142871       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1210 06:23:38.143136       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:38.148718       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:38.148797       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:23:38.322854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:23:38.938407       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 06:23:38.942904       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:23:38.942921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:23:39.448043       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:23:39.488980       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:23:39.542870       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:23:39.549076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1210 06:23:39.550416       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:23:39.555238       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:23:39.995508       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:23:40.485029       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:23:40.495243       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:23:40.503949       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:23:44.999807       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:45.005946       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:23:45.746982       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:23:45.796928       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1210 06:24:08.657867       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:60636: use of closed network connection
	
	
	==> kube-controller-manager [ab4014e0220460ae9b16da0d300c90863a27f8d82a307a8045825cc31d479983] <==
	I1210 06:23:44.994318       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:23:44.994356       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:23:44.994440       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 06:23:44.994565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:23:44.995432       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:23:44.995486       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 06:23:44.995502       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 06:23:44.995570       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 06:23:44.995581       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:23:44.995571       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:23:44.995582       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:23:44.995692       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:23:44.998912       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1210 06:23:44.998979       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1210 06:23:44.999009       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1210 06:23:44.999015       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 06:23:44.999021       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 06:23:45.000253       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:23:45.000985       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:23:45.002176       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:23:45.008283       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-643991" podCIDRs=["10.244.0.0/24"]
	I1210 06:23:45.015506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:23:45.018900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:23:45.019998       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 06:23:59.946393       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d06c4885ee1395e0d3df86a796d4a99bae2fd918a98a20509c5541a5fc7a96bf] <==
	I1210 06:23:46.845109       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:23:46.928040       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:23:47.029049       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:23:47.029093       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:23:47.029206       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:23:47.048706       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:23:47.048786       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:23:47.054625       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:23:47.055072       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:23:47.055120       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:23:47.056770       1 config.go:200] "Starting service config controller"
	I1210 06:23:47.056791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:23:47.056824       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:23:47.056830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:23:47.056836       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:23:47.056858       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:23:47.057625       1 config.go:309] "Starting node config controller"
	I1210 06:23:47.057784       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:23:47.057798       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:23:47.156973       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:23:47.157001       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:23:47.157917       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e937271d5100c42466c5baa85893d891ceaeb1e59b971355934907a46b861fb6] <==
	E1210 06:23:38.025001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:23:38.025075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:23:38.025492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:23:38.025521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 06:23:38.025541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:23:38.025568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:23:38.025620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 06:23:38.026018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:23:38.026171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 06:23:38.026462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:23:38.026583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 06:23:38.026669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:23:38.835579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 06:23:38.948058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 06:23:38.971369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 06:23:38.994740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 06:23:39.050607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 06:23:39.056623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 06:23:39.068858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 06:23:39.079161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 06:23:39.210689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 06:23:39.233043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 06:23:39.242084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 06:23:39.320669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1210 06:23:41.817128       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.053620    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.843794    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj82\" (UniqueName: \"kubernetes.io/projected/f4ed478e-05fc-4161-ae59-666311f1a620-kube-api-access-hcj82\") pod \"kube-proxy-mkpzc\" (UID: \"f4ed478e-05fc-4161-ae59-666311f1a620\") " pod="kube-system/kube-proxy-mkpzc"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.843852    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a757a831-3437-4844-a84f-3eb2b8d6dad5-xtables-lock\") pod \"kindnet-7j6ns\" (UID: \"a757a831-3437-4844-a84f-3eb2b8d6dad5\") " pod="kube-system/kindnet-7j6ns"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.843881    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dns\" (UniqueName: \"kubernetes.io/projected/a757a831-3437-4844-a84f-3eb2b8d6dad5-kube-api-access-s8dns\") pod \"kindnet-7j6ns\" (UID: \"a757a831-3437-4844-a84f-3eb2b8d6dad5\") " pod="kube-system/kindnet-7j6ns"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.843903    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a757a831-3437-4844-a84f-3eb2b8d6dad5-lib-modules\") pod \"kindnet-7j6ns\" (UID: \"a757a831-3437-4844-a84f-3eb2b8d6dad5\") " pod="kube-system/kindnet-7j6ns"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.843965    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4ed478e-05fc-4161-ae59-666311f1a620-kube-proxy\") pod \"kube-proxy-mkpzc\" (UID: \"f4ed478e-05fc-4161-ae59-666311f1a620\") " pod="kube-system/kube-proxy-mkpzc"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.844079    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4ed478e-05fc-4161-ae59-666311f1a620-xtables-lock\") pod \"kube-proxy-mkpzc\" (UID: \"f4ed478e-05fc-4161-ae59-666311f1a620\") " pod="kube-system/kube-proxy-mkpzc"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.844131    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4ed478e-05fc-4161-ae59-666311f1a620-lib-modules\") pod \"kube-proxy-mkpzc\" (UID: \"f4ed478e-05fc-4161-ae59-666311f1a620\") " pod="kube-system/kube-proxy-mkpzc"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:45.844177    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a757a831-3437-4844-a84f-3eb2b8d6dad5-cni-cfg\") pod \"kindnet-7j6ns\" (UID: \"a757a831-3437-4844-a84f-3eb2b8d6dad5\") " pod="kube-system/kindnet-7j6ns"
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: E1210 06:23:45.952976    1325 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: E1210 06:23:45.953019    1325 projected.go:196] Error preparing data for projected volume kube-api-access-hcj82 for pod kube-system/kube-proxy-mkpzc: configmap "kube-root-ca.crt" not found
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: E1210 06:23:45.953171    1325 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4ed478e-05fc-4161-ae59-666311f1a620-kube-api-access-hcj82 podName:f4ed478e-05fc-4161-ae59-666311f1a620 nodeName:}" failed. No retries permitted until 2025-12-10 06:23:46.4531259 +0000 UTC m=+6.223855240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hcj82" (UniqueName: "kubernetes.io/projected/f4ed478e-05fc-4161-ae59-666311f1a620-kube-api-access-hcj82") pod "kube-proxy-mkpzc" (UID: "f4ed478e-05fc-4161-ae59-666311f1a620") : configmap "kube-root-ca.crt" not found
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: E1210 06:23:45.953401    1325 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: E1210 06:23:45.953439    1325 projected.go:196] Error preparing data for projected volume kube-api-access-s8dns for pod kube-system/kindnet-7j6ns: configmap "kube-root-ca.crt" not found
	Dec 10 06:23:45 default-k8s-diff-port-643991 kubelet[1325]: E1210 06:23:45.953537    1325 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a757a831-3437-4844-a84f-3eb2b8d6dad5-kube-api-access-s8dns podName:a757a831-3437-4844-a84f-3eb2b8d6dad5 nodeName:}" failed. No retries permitted until 2025-12-10 06:23:46.453510176 +0000 UTC m=+6.224239502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s8dns" (UniqueName: "kubernetes.io/projected/a757a831-3437-4844-a84f-3eb2b8d6dad5-kube-api-access-s8dns") pod "kindnet-7j6ns" (UID: "a757a831-3437-4844-a84f-3eb2b8d6dad5") : configmap "kube-root-ca.crt" not found
	Dec 10 06:23:47 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:47.359890    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7j6ns" podStartSLOduration=2.359867172 podStartE2EDuration="2.359867172s" podCreationTimestamp="2025-12-10 06:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:47.359795909 +0000 UTC m=+7.130525245" watchObservedRunningTime="2025-12-10 06:23:47.359867172 +0000 UTC m=+7.130596513"
	Dec 10 06:23:49 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:49.648729    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mkpzc" podStartSLOduration=4.648705379 podStartE2EDuration="4.648705379s" podCreationTimestamp="2025-12-10 06:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:47.377693843 +0000 UTC m=+7.148423185" watchObservedRunningTime="2025-12-10 06:23:49.648705379 +0000 UTC m=+9.419434720"
	Dec 10 06:23:57 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:57.635126    1325 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 10 06:23:57 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:57.732787    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45vx\" (UniqueName: \"kubernetes.io/projected/e151b597-32ae-4033-8ce6-fc3d9efd72b2-kube-api-access-m45vx\") pod \"coredns-66bc5c9577-znsz6\" (UID: \"e151b597-32ae-4033-8ce6-fc3d9efd72b2\") " pod="kube-system/coredns-66bc5c9577-znsz6"
	Dec 10 06:23:57 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:57.732833    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e151b597-32ae-4033-8ce6-fc3d9efd72b2-config-volume\") pod \"coredns-66bc5c9577-znsz6\" (UID: \"e151b597-32ae-4033-8ce6-fc3d9efd72b2\") " pod="kube-system/coredns-66bc5c9577-znsz6"
	Dec 10 06:23:57 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:57.732858    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dc38e64c-cf9f-42d4-a886-014f884f425d-tmp\") pod \"storage-provisioner\" (UID: \"dc38e64c-cf9f-42d4-a886-014f884f425d\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:57 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:57.732872    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ls4r\" (UniqueName: \"kubernetes.io/projected/dc38e64c-cf9f-42d4-a886-014f884f425d-kube-api-access-2ls4r\") pod \"storage-provisioner\" (UID: \"dc38e64c-cf9f-42d4-a886-014f884f425d\") " pod="kube-system/storage-provisioner"
	Dec 10 06:23:58 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:23:58.388946    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.388921402 podStartE2EDuration="12.388921402s" podCreationTimestamp="2025-12-10 06:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:58.388591262 +0000 UTC m=+18.159320603" watchObservedRunningTime="2025-12-10 06:23:58.388921402 +0000 UTC m=+18.159650755"
	Dec 10 06:24:00 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:24:00.570006    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-znsz6" podStartSLOduration=14.569977868 podStartE2EDuration="14.569977868s" podCreationTimestamp="2025-12-10 06:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:23:58.405449486 +0000 UTC m=+18.176178827" watchObservedRunningTime="2025-12-10 06:24:00.569977868 +0000 UTC m=+20.340707208"
	Dec 10 06:24:00 default-k8s-diff-port-643991 kubelet[1325]: I1210 06:24:00.650699    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7mw9\" (UniqueName: \"kubernetes.io/projected/0d90f3d0-1378-4217-b9cc-2116a1d1dbbb-kube-api-access-k7mw9\") pod \"busybox\" (UID: \"0d90f3d0-1378-4217-b9cc-2116a1d1dbbb\") " pod="default/busybox"
	
	
	==> storage-provisioner [a0eeabf08487d16870d618dc406ae5eae2cccee66e70309c775f74e80df70748] <==
	I1210 06:23:58.046846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:23:58.058248       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:23:58.058300       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:23:58.061180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:58.068664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:23:58.068836       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:23:58.069039       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643991_17d5c039-ef1d-4815-b86b-a9596c21fb0a!
	I1210 06:23:58.069136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed6e8b9e-41cf-4e31-adb7-3192df14d1bf", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-643991_17d5c039-ef1d-4815-b86b-a9596c21fb0a became leader
	W1210 06:23:58.071503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:23:58.076429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:23:58.169206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643991_17d5c039-ef1d-4815-b86b-a9596c21fb0a!
	W1210 06:24:00.079657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:00.085882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:02.092520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:02.101909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:04.105887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:04.114560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:06.117717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:06.122568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:08.126018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:08.130582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:10.134488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:24:10.138610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-424086 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-424086 --alsologtostderr -v=1: exit status 80 (2.061537239s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-424086 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:24:49.718243  334569 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:49.718540  334569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:49.718551  334569 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:49.718556  334569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:49.718815  334569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:49.719051  334569 out.go:368] Setting JSON to false
	I1210 06:24:49.719075  334569 mustload.go:66] Loading cluster: old-k8s-version-424086
	I1210 06:24:49.719661  334569 config.go:182] Loaded profile config "old-k8s-version-424086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 06:24:49.720120  334569 cli_runner.go:164] Run: docker container inspect old-k8s-version-424086 --format={{.State.Status}}
	I1210 06:24:49.746361  334569 host.go:66] Checking if "old-k8s-version-424086" exists ...
	I1210 06:24:49.746704  334569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:49.812610  334569 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-10 06:24:49.800998807 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:49.813243  334569 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-424086 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:24:49.815655  334569 out.go:179] * Pausing node old-k8s-version-424086 ... 
	I1210 06:24:49.817137  334569 host.go:66] Checking if "old-k8s-version-424086" exists ...
	I1210 06:24:49.817430  334569 ssh_runner.go:195] Run: systemctl --version
	I1210 06:24:49.817524  334569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-424086
	I1210 06:24:49.836745  334569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/old-k8s-version-424086/id_rsa Username:docker}
	I1210 06:24:49.937029  334569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:49.954928  334569 pause.go:52] kubelet running: true
	I1210 06:24:49.955033  334569 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:24:50.189111  334569 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:24:50.189215  334569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:24:50.279898  334569 cri.go:89] found id: "914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a"
	I1210 06:24:50.279924  334569 cri.go:89] found id: "a64b25c87547a694a7859016b2ba1fcc83c7b299676d2b8c2fcf983aafc02a6a"
	I1210 06:24:50.279930  334569 cri.go:89] found id: "1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3"
	I1210 06:24:50.279936  334569 cri.go:89] found id: "b21b5007f34e2df91ee40c8acf976b58a08736cb563430c576aebb7a80a57bd7"
	I1210 06:24:50.279941  334569 cri.go:89] found id: "d5ff7c07b23bb6e013e976a59d08c0963394c1d3c83054f617318b04962837f7"
	I1210 06:24:50.279947  334569 cri.go:89] found id: "93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5"
	I1210 06:24:50.279951  334569 cri.go:89] found id: "99a520617b27091388284c36bef3465458e40aa0ab841df386ee409f39ccbee2"
	I1210 06:24:50.279956  334569 cri.go:89] found id: "70b526ae1f4ce1d3bdeff2ca86e39c33688d70edf03a257a1b0eeda29e7059a9"
	I1210 06:24:50.279960  334569 cri.go:89] found id: "eca25d4da655329c0f900bc2d9a38df2f8b3abd27a1fb23973129f968c2ffbea"
	I1210 06:24:50.279974  334569 cri.go:89] found id: "2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	I1210 06:24:50.279983  334569 cri.go:89] found id: "8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc"
	I1210 06:24:50.279987  334569 cri.go:89] found id: ""
	I1210 06:24:50.280026  334569 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:24:50.295496  334569 retry.go:31] will retry after 351.282529ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:50Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:24:50.647047  334569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:50.661606  334569 pause.go:52] kubelet running: false
	I1210 06:24:50.661664  334569 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:24:50.836028  334569 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:24:50.836122  334569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:24:50.909868  334569 cri.go:89] found id: "914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a"
	I1210 06:24:50.909889  334569 cri.go:89] found id: "a64b25c87547a694a7859016b2ba1fcc83c7b299676d2b8c2fcf983aafc02a6a"
	I1210 06:24:50.909895  334569 cri.go:89] found id: "1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3"
	I1210 06:24:50.909900  334569 cri.go:89] found id: "b21b5007f34e2df91ee40c8acf976b58a08736cb563430c576aebb7a80a57bd7"
	I1210 06:24:50.909904  334569 cri.go:89] found id: "d5ff7c07b23bb6e013e976a59d08c0963394c1d3c83054f617318b04962837f7"
	I1210 06:24:50.909908  334569 cri.go:89] found id: "93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5"
	I1210 06:24:50.909912  334569 cri.go:89] found id: "99a520617b27091388284c36bef3465458e40aa0ab841df386ee409f39ccbee2"
	I1210 06:24:50.909916  334569 cri.go:89] found id: "70b526ae1f4ce1d3bdeff2ca86e39c33688d70edf03a257a1b0eeda29e7059a9"
	I1210 06:24:50.909919  334569 cri.go:89] found id: "eca25d4da655329c0f900bc2d9a38df2f8b3abd27a1fb23973129f968c2ffbea"
	I1210 06:24:50.909935  334569 cri.go:89] found id: "2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	I1210 06:24:50.909940  334569 cri.go:89] found id: "8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc"
	I1210 06:24:50.909943  334569 cri.go:89] found id: ""
	I1210 06:24:50.909984  334569 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:24:50.922451  334569 retry.go:31] will retry after 521.17907ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:50Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:24:51.444159  334569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:51.458090  334569 pause.go:52] kubelet running: false
	I1210 06:24:51.458146  334569 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:24:51.605721  334569 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:24:51.605801  334569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:24:51.677170  334569 cri.go:89] found id: "914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a"
	I1210 06:24:51.677199  334569 cri.go:89] found id: "a64b25c87547a694a7859016b2ba1fcc83c7b299676d2b8c2fcf983aafc02a6a"
	I1210 06:24:51.677203  334569 cri.go:89] found id: "1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3"
	I1210 06:24:51.677206  334569 cri.go:89] found id: "b21b5007f34e2df91ee40c8acf976b58a08736cb563430c576aebb7a80a57bd7"
	I1210 06:24:51.677209  334569 cri.go:89] found id: "d5ff7c07b23bb6e013e976a59d08c0963394c1d3c83054f617318b04962837f7"
	I1210 06:24:51.677212  334569 cri.go:89] found id: "93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5"
	I1210 06:24:51.677215  334569 cri.go:89] found id: "99a520617b27091388284c36bef3465458e40aa0ab841df386ee409f39ccbee2"
	I1210 06:24:51.677218  334569 cri.go:89] found id: "70b526ae1f4ce1d3bdeff2ca86e39c33688d70edf03a257a1b0eeda29e7059a9"
	I1210 06:24:51.677221  334569 cri.go:89] found id: "eca25d4da655329c0f900bc2d9a38df2f8b3abd27a1fb23973129f968c2ffbea"
	I1210 06:24:51.677226  334569 cri.go:89] found id: "2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	I1210 06:24:51.677229  334569 cri.go:89] found id: "8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc"
	I1210 06:24:51.677232  334569 cri.go:89] found id: ""
	I1210 06:24:51.677276  334569 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:24:51.692283  334569 out.go:203] 
	W1210 06:24:51.693962  334569 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:24:51.693989  334569 out.go:285] * 
	* 
	W1210 06:24:51.698139  334569 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:24:51.699662  334569 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-424086 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-424086
helpers_test.go:244: (dbg) docker inspect old-k8s-version-424086:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe",
	        "Created": "2025-12-10T06:22:41.51619025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321496,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:55.095716852Z",
	            "FinishedAt": "2025-12-10T06:23:54.137620257Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/hosts",
	        "LogPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe-json.log",
	        "Name": "/old-k8s-version-424086",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-424086:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-424086",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe",
	                "LowerDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-424086",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-424086/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-424086",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-424086",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-424086",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dda5b6fbdb9aea8eead8049005c4e2320586fb658ed66c793fc6840d4dc8f8ad",
	            "SandboxKey": "/var/run/docker/netns/dda5b6fbdb9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-424086": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "027b45f7486274e056d3788623afd124007b73108c46de2edc58de9683929366",
	                    "EndpointID": "9a6c1667ee8557daf9efe1776ae54f76294cca270f4d2a62c58b0c210e5a2e2f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "36:a3:3e:7f:1f:e0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-424086",
	                        "d21017d71f3a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086: exit status 2 (330.193148ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-424086 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-424086 logs -n 25: (1.169053233s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-201263 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo crio config                                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p bridge-201263                                                                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                               │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:29
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
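
The header above documents the klog-style line framing used by every entry that follows. As an illustrative aside (not part of the minikube log itself), a line such as the first one below can be split into its fields with a small Go program; the regular expression mirrors the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" pattern stated in the header:

package main

import (
	"fmt"
	"regexp"
)

// Minimal sketch (not minikube code): split a klog/glog-style line of the
// form "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" into its fields.
var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	sample := "I1210 06:24:29.463157  331193 out.go:360] Setting OutFile to fd 1 ..."
	m := logLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

Running it prints the severity, date, timestamp, thread id, source location, and message as separate fields.
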
	I1210 06:24:29.463157  331193 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:29.463277  331193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:29.463288  331193 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:29.463295  331193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:29.463635  331193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:29.464064  331193 out.go:368] Setting JSON to false
	I1210 06:24:29.465328  331193 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4020,"bootTime":1765343849,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:29.465387  331193 start.go:143] virtualization: kvm guest
	I1210 06:24:29.467168  331193 out.go:179] * [default-k8s-diff-port-643991] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:29.472151  331193 notify.go:221] Checking for updates...
	I1210 06:24:29.472194  331193 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:29.473813  331193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:29.475281  331193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:29.476775  331193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:29.480653  331193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:29.482107  331193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:29.527340  327833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.611746156s)
	I1210 06:24:29.527500  327833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.459286319s)
	I1210 06:24:29.527346  327833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.630139588s)
	I1210 06:24:29.527606  327833 api_server.go:72] duration metric: took 2.847745576s to wait for apiserver process to appear ...
	I1210 06:24:29.527659  327833 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:24:29.527682  327833 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 06:24:29.529329  327833 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-133470 addons enable metrics-server
	
	I1210 06:24:29.538145  327833 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:29.538175  327833 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:29.542637  327833 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:24:29.483944  331193 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:29.484575  331193 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:29.511842  331193 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:29.511943  331193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:29.570351  331193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 06:24:29.560751805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
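
The info.go entry above comes from running docker system info --format "{{json .}}" and decoding the JSON output. A minimal stand-alone sketch of that pattern, assuming only a local docker CLI on PATH (the struct below is a hand-picked subset of fields visible in the log, not minikube's actual type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal sketch: run "docker system info --format '{{json .}}'" and decode a
// few of the fields that appear in the log entry above.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	CgroupDriver    string `json:"CgroupDriver"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("docker %s, cgroup driver %s, %s, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.CgroupDriver, info.OperatingSystem, info.NCPU, info.MemTotal)
}
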
	I1210 06:24:29.570447  331193 docker.go:319] overlay module found
	I1210 06:24:29.573605  331193 out.go:179] * Using the docker driver based on existing profile
	I1210 06:24:29.575214  331193 start.go:309] selected driver: docker
	I1210 06:24:29.575232  331193 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:29.575339  331193 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:29.575949  331193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:29.633518  331193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 06:24:29.623561147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:29.633853  331193 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:29.633881  331193 cni.go:84] Creating CNI manager for ""
	I1210 06:24:29.633943  331193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:29.634002  331193 start.go:353] cluster config:
	{Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:29.636111  331193 out.go:179] * Starting "default-k8s-diff-port-643991" primary control-plane node in "default-k8s-diff-port-643991" cluster
	I1210 06:24:29.637418  331193 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:29.638784  331193 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:29.639957  331193 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:24:29.640000  331193 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:29.640013  331193 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:29.640039  331193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:29.640097  331193 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:29.640113  331193 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:24:29.640238  331193 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json ...
	I1210 06:24:29.661567  331193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:29.661588  331193 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:29.661608  331193 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:24:29.661667  331193 start.go:360] acquireMachinesLock for default-k8s-diff-port-643991: {Name:mk370efe05d640ea21e9150c952c3b99e34124d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:29.661731  331193 start.go:364] duration metric: took 44.211µs to acquireMachinesLock for "default-k8s-diff-port-643991"
	I1210 06:24:29.661754  331193 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:24:29.661764  331193 fix.go:54] fixHost starting: 
	I1210 06:24:29.661967  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:29.681408  331193 fix.go:112] recreateIfNeeded on default-k8s-diff-port-643991: state=Stopped err=<nil>
	W1210 06:24:29.681439  331193 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:24:26.193807  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:28.194574  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	I1210 06:24:28.402525  326955 addons.go:530] duration metric: took 2.602830079s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:24:28.892120  326955 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 06:24:28.899537  326955 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:28.899646  326955 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:29.391259  326955 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 06:24:29.397271  326955 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 06:24:29.398588  326955 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:24:29.398617  326955 api_server.go:131] duration metric: took 1.007407345s to wait for apiserver health ...
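
The sequence above is the usual wait loop: /healthz keeps returning 500 while poststarthooks such as rbac/bootstrap-roles are still pending, and the client retries until it gets 200. A minimal sketch of that polling, using the address from this log; certificate verification is skipped here only to keep the sketch self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Minimal sketch (not minikube's api_server.go): poll an apiserver /healthz
// endpoint until it returns HTTP 200, as the log above does while some
// poststarthooks are still failing (HTTP 500).
func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.103.2:8443/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz ok:", string(body))
			return
		}
		fmt.Printf("healthz returned %d, retrying...\n", resp.StatusCode)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
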
	I1210 06:24:29.398627  326955 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:24:29.402410  326955 system_pods.go:59] 8 kube-system pods found
	I1210 06:24:29.402442  326955 system_pods.go:61] "coredns-7d764666f9-hr4gk" [2d1d5353-6d76-4f61-9e66-12eee045a735] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:29.402450  326955 system_pods.go:61] "etcd-no-preload-713838" [38765820-1675-45e4-ac49-ebd982a5f5a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:29.402456  326955 system_pods.go:61] "kindnet-28s4q" [55436b1b-68c3-4f73-8929-29ec9ae87ce6] Running
	I1210 06:24:29.402464  326955 system_pods.go:61] "kube-apiserver-no-preload-713838" [bfc05d05-cb3e-4380-be23-f5dd2c56ec7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:29.402487  326955 system_pods.go:61] "kube-controller-manager-no-preload-713838" [e2fa176c-fbca-43a7-aa7d-84b29a495f53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:29.402494  326955 system_pods.go:61] "kube-proxy-c62hk" [b48eb137-310e-4bea-a99e-bb776ad77807] Running
	I1210 06:24:29.402502  326955 system_pods.go:61] "kube-scheduler-no-preload-713838" [c61624ba-385e-49d5-8008-fa1083a22b1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:29.402510  326955 system_pods.go:61] "storage-provisioner" [e89d4b38-da41-4612-8cf9-1440b142a9af] Running
	I1210 06:24:29.402519  326955 system_pods.go:74] duration metric: took 3.884956ms to wait for pod list to return data ...
	I1210 06:24:29.402528  326955 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:24:29.405488  326955 default_sa.go:45] found service account: "default"
	I1210 06:24:29.405513  326955 default_sa.go:55] duration metric: took 2.978219ms for default service account to be created ...
	I1210 06:24:29.405524  326955 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:24:29.408711  326955 system_pods.go:86] 8 kube-system pods found
	I1210 06:24:29.408748  326955 system_pods.go:89] "coredns-7d764666f9-hr4gk" [2d1d5353-6d76-4f61-9e66-12eee045a735] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:29.408760  326955 system_pods.go:89] "etcd-no-preload-713838" [38765820-1675-45e4-ac49-ebd982a5f5a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:29.408771  326955 system_pods.go:89] "kindnet-28s4q" [55436b1b-68c3-4f73-8929-29ec9ae87ce6] Running
	I1210 06:24:29.408781  326955 system_pods.go:89] "kube-apiserver-no-preload-713838" [bfc05d05-cb3e-4380-be23-f5dd2c56ec7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:29.408795  326955 system_pods.go:89] "kube-controller-manager-no-preload-713838" [e2fa176c-fbca-43a7-aa7d-84b29a495f53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:29.408806  326955 system_pods.go:89] "kube-proxy-c62hk" [b48eb137-310e-4bea-a99e-bb776ad77807] Running
	I1210 06:24:29.408815  326955 system_pods.go:89] "kube-scheduler-no-preload-713838" [c61624ba-385e-49d5-8008-fa1083a22b1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:29.408826  326955 system_pods.go:89] "storage-provisioner" [e89d4b38-da41-4612-8cf9-1440b142a9af] Running
	I1210 06:24:29.408835  326955 system_pods.go:126] duration metric: took 3.303792ms to wait for k8s-apps to be running ...
	I1210 06:24:29.408843  326955 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:24:29.408886  326955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:29.427035  326955 system_svc.go:56] duration metric: took 18.181731ms WaitForService to wait for kubelet
	I1210 06:24:29.427067  326955 kubeadm.go:587] duration metric: took 3.627496643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:29.427089  326955 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:24:29.430862  326955 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:24:29.430891  326955 node_conditions.go:123] node cpu capacity is 8
	I1210 06:24:29.430908  326955 node_conditions.go:105] duration metric: took 3.813169ms to run NodePressure ...
	I1210 06:24:29.430923  326955 start.go:242] waiting for startup goroutines ...
	I1210 06:24:29.430932  326955 start.go:247] waiting for cluster config update ...
	I1210 06:24:29.430945  326955 start.go:256] writing updated cluster config ...
	I1210 06:24:29.431251  326955 ssh_runner.go:195] Run: rm -f paused
	I1210 06:24:29.436265  326955 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:29.440395  326955 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:24:31.446014  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:24:29.543898  327833 addons.go:530] duration metric: took 2.863978518s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:24:30.028267  327833 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 06:24:30.033614  327833 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1210 06:24:30.034748  327833 api_server.go:141] control plane version: v1.34.2
	I1210 06:24:30.034783  327833 api_server.go:131] duration metric: took 507.112225ms to wait for apiserver health ...
	I1210 06:24:30.034795  327833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:24:30.038895  327833 system_pods.go:59] 8 kube-system pods found
	I1210 06:24:30.038939  327833 system_pods.go:61] "coredns-66bc5c9577-gw75x" [e735e195-23a6-4d4f-9d07-f49ed4f8e1ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:30.038952  327833 system_pods.go:61] "etcd-embed-certs-133470" [25d5119b-7d9d-4093-abab-f0d2a4164472] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:30.038965  327833 system_pods.go:61] "kindnet-zhm6w" [7ab9de47-d8c7-438f-892e-28d2c4fd45b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:30.038975  327833 system_pods.go:61] "kube-apiserver-embed-certs-133470" [c32afacf-80f8-4b1c-814f-80da0f251890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:30.038987  327833 system_pods.go:61] "kube-controller-manager-embed-certs-133470" [c26fa721-528e-4575-bd96-ae3cb1e0f65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:30.039000  327833 system_pods.go:61] "kube-proxy-fkdk9" [e4897efd-dd92-4bec-8784-0352ec933eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:30.039018  327833 system_pods.go:61] "kube-scheduler-embed-certs-133470" [f745fb12-43ce-46e0-8965-eb2683233045] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:30.039031  327833 system_pods.go:61] "storage-provisioner" [fc2a0a30-365d-40a5-9f1a-bc551e6beec4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:30.039043  327833 system_pods.go:74] duration metric: took 4.241151ms to wait for pod list to return data ...
	I1210 06:24:30.039063  327833 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:24:30.041895  327833 default_sa.go:45] found service account: "default"
	I1210 06:24:30.041924  327833 default_sa.go:55] duration metric: took 2.850805ms for default service account to be created ...
	I1210 06:24:30.041937  327833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:24:30.045212  327833 system_pods.go:86] 8 kube-system pods found
	I1210 06:24:30.045241  327833 system_pods.go:89] "coredns-66bc5c9577-gw75x" [e735e195-23a6-4d4f-9d07-f49ed4f8e1ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:30.045272  327833 system_pods.go:89] "etcd-embed-certs-133470" [25d5119b-7d9d-4093-abab-f0d2a4164472] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:30.045280  327833 system_pods.go:89] "kindnet-zhm6w" [7ab9de47-d8c7-438f-892e-28d2c4fd45b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:30.045291  327833 system_pods.go:89] "kube-apiserver-embed-certs-133470" [c32afacf-80f8-4b1c-814f-80da0f251890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:30.045296  327833 system_pods.go:89] "kube-controller-manager-embed-certs-133470" [c26fa721-528e-4575-bd96-ae3cb1e0f65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:30.045303  327833 system_pods.go:89] "kube-proxy-fkdk9" [e4897efd-dd92-4bec-8784-0352ec933eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:30.045311  327833 system_pods.go:89] "kube-scheduler-embed-certs-133470" [f745fb12-43ce-46e0-8965-eb2683233045] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:30.045330  327833 system_pods.go:89] "storage-provisioner" [fc2a0a30-365d-40a5-9f1a-bc551e6beec4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:30.045342  327833 system_pods.go:126] duration metric: took 3.399379ms to wait for k8s-apps to be running ...
	I1210 06:24:30.045350  327833 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:24:30.045387  327833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:30.060920  327833 system_svc.go:56] duration metric: took 15.559546ms WaitForService to wait for kubelet
	I1210 06:24:30.060953  327833 kubeadm.go:587] duration metric: took 3.381092368s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:30.060974  327833 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:24:30.064243  327833 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:24:30.064277  327833 node_conditions.go:123] node cpu capacity is 8
	I1210 06:24:30.064294  327833 node_conditions.go:105] duration metric: took 3.313458ms to run NodePressure ...
	I1210 06:24:30.064309  327833 start.go:242] waiting for startup goroutines ...
	I1210 06:24:30.064321  327833 start.go:247] waiting for cluster config update ...
	I1210 06:24:30.064331  327833 start.go:256] writing updated cluster config ...
	I1210 06:24:30.064637  327833 ssh_runner.go:195] Run: rm -f paused
	I1210 06:24:30.069283  327833 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:30.073958  327833 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:24:32.079411  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:34.080413  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
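
The pod_ready.go lines above poll each matching kube-system pod until its Ready condition is True (or the 4m0s budget runs out). A rough equivalent with client-go, assuming a reachable kubeconfig at the default location; the pod name is copied from the log and purely illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Minimal sketch (not minikube's pod_ready.go): poll one kube-system pod
// until its PodReady condition reports True.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	podName := "coredns-66bc5c9577-gw75x" // example name taken from the log above
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		fmt.Println("pod not Ready yet, retrying...")
		time.Sleep(2 * time.Second)
	}
}
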
	I1210 06:24:29.683735  331193 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-643991" ...
	I1210 06:24:29.683810  331193 cli_runner.go:164] Run: docker start default-k8s-diff-port-643991
	I1210 06:24:29.979666  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:30.003869  331193 kic.go:430] container "default-k8s-diff-port-643991" state is running.
	I1210 06:24:30.004450  331193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:24:30.028650  331193 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json ...
	I1210 06:24:30.028920  331193 machine.go:94] provisionDockerMachine start ...
	I1210 06:24:30.029007  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:30.052032  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:30.052343  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:30.052360  331193 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:24:30.053182  331193 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42978->127.0.0.1:33129: read: connection reset by peer
	I1210 06:24:33.203860  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643991
	
	I1210 06:24:33.203892  331193 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-643991"
	I1210 06:24:33.203955  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:33.234171  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:33.234533  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:33.234551  331193 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643991 && echo "default-k8s-diff-port-643991" | sudo tee /etc/hostname
	I1210 06:24:33.411300  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643991
	
	I1210 06:24:33.411628  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:33.438798  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:33.439123  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:33.439155  331193 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643991/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:24:33.601272  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: 
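
The quoted shell fragment above makes sure /etc/hosts maps 127.0.1.1 to the new hostname before certificates are generated. The same idea, sketched in Go against an in-memory copy of the file (the sample contents are made up for the example):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Minimal sketch (not minikube's ubuntu.go): ensure the 127.0.1.1 entry of an
// /etc/hosts-style string carries the machine's hostname, mirroring the shell
// fragment shown in the log above.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // hostname already present on some line
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(hosts, "default-k8s-diff-port-643991"))
}
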
	I1210 06:24:33.601323  331193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:24:33.601369  331193 ubuntu.go:190] setting up certificates
	I1210 06:24:33.601379  331193 provision.go:84] configureAuth start
	I1210 06:24:33.601459  331193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:24:33.628451  331193 provision.go:143] copyHostCerts
	I1210 06:24:33.628540  331193 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:24:33.628555  331193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:24:33.628639  331193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:24:33.628745  331193 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:24:33.628752  331193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:24:33.628788  331193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:24:33.628857  331193 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:24:33.628863  331193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:24:33.628899  331193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:24:33.628975  331193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643991 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-643991 localhost minikube]
	I1210 06:24:33.955160  331193 provision.go:177] copyRemoteCerts
	I1210 06:24:33.955246  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:24:33.955303  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:33.981995  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:34.096392  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:24:34.125757  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:24:34.152687  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:24:34.177938  331193 provision.go:87] duration metric: took 576.537673ms to configureAuth
	I1210 06:24:34.177967  331193 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:24:34.178176  331193 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:34.178297  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:34.205617  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:34.205916  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:34.205954  331193 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1210 06:24:30.194799  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:32.695060  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:34.802354  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:33.452082  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:35.968068  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:24:36.195848  321295 pod_ready.go:94] pod "coredns-5dd5756b68-gmssk" is "Ready"
	I1210 06:24:36.195877  321295 pod_ready.go:86] duration metric: took 30.508462663s for pod "coredns-5dd5756b68-gmssk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.199954  321295 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.205759  321295 pod_ready.go:94] pod "etcd-old-k8s-version-424086" is "Ready"
	I1210 06:24:36.205791  321295 pod_ready.go:86] duration metric: took 5.808338ms for pod "etcd-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.210592  321295 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.218767  321295 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-424086" is "Ready"
	I1210 06:24:36.218885  321295 pod_ready.go:86] duration metric: took 8.266164ms for pod "kube-apiserver-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.224034  321295 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.391647  321295 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-424086" is "Ready"
	I1210 06:24:36.391679  321295 pod_ready.go:86] duration metric: took 167.619952ms for pod "kube-controller-manager-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.592384  321295 pod_ready.go:83] waiting for pod "kube-proxy-v9pgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.992365  321295 pod_ready.go:94] pod "kube-proxy-v9pgf" is "Ready"
	I1210 06:24:36.992397  321295 pod_ready.go:86] duration metric: took 399.981945ms for pod "kube-proxy-v9pgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:37.193251  321295 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:37.591501  321295 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-424086" is "Ready"
	I1210 06:24:37.591532  321295 pod_ready.go:86] duration metric: took 398.256722ms for pod "kube-scheduler-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:37.591548  321295 pod_ready.go:40] duration metric: took 31.910201225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:37.662964  321295 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 06:24:37.703930  321295 out.go:203] 
	W1210 06:24:37.707593  321295 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 06:24:37.712692  321295 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 06:24:37.732833  321295 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-424086" cluster and "default" namespace by default
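
The warning above flags a client/cluster minor-version skew of 6 (kubectl 1.34.3 against Kubernetes 1.28.0). A toy calculation of that skew, not minikube's actual check:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Minimal sketch: compute the minor-version skew between a kubectl client and
// a cluster, as reported in the warning above (1.34.3 vs 1.28.0 -> skew 6).
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.34.3", "1.28.0"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl %s vs cluster %s: minor skew %d\n", client, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl may have incompatibilities with this cluster version")
	}
}
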
	I1210 06:24:35.547354  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:24:35.547379  331193 machine.go:97] duration metric: took 5.518440159s to provisionDockerMachine
	I1210 06:24:35.547393  331193 start.go:293] postStartSetup for "default-k8s-diff-port-643991" (driver="docker")
	I1210 06:24:35.547407  331193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:24:35.547499  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:24:35.547554  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.575350  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:35.697945  331193 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:24:35.705182  331193 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:24:35.705215  331193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:24:35.705229  331193 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:24:35.705290  331193 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:24:35.705526  331193 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:24:35.705704  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:24:35.718185  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:24:35.745926  331193 start.go:296] duration metric: took 198.518886ms for postStartSetup
	I1210 06:24:35.746085  331193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:24:35.746147  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.777230  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:35.889165  331193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:24:35.896174  331193 fix.go:56] duration metric: took 6.234403861s for fixHost
	I1210 06:24:35.896201  331193 start.go:83] releasing machines lock for "default-k8s-diff-port-643991", held for 6.234458178s
	I1210 06:24:35.896265  331193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:24:35.923656  331193 ssh_runner.go:195] Run: cat /version.json
	I1210 06:24:35.923728  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.924007  331193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:24:35.924105  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.957157  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:35.965882  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:36.073311  331193 ssh_runner.go:195] Run: systemctl --version
	I1210 06:24:36.164641  331193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:24:36.224004  331193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:24:36.231272  331193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:24:36.231368  331193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:24:36.243888  331193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:24:36.243910  331193 start.go:496] detecting cgroup driver to use...
	I1210 06:24:36.243938  331193 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:24:36.243972  331193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:24:36.265874  331193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:24:36.286972  331193 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:24:36.287031  331193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:24:36.310408  331193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:24:36.331188  331193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:24:36.456751  331193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:24:36.580985  331193 docker.go:234] disabling docker service ...
	I1210 06:24:36.581051  331193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:24:36.601430  331193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:24:36.619968  331193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:24:36.740370  331193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:24:36.855939  331193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:24:36.874056  331193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:24:36.894784  331193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:24:36.894865  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.907755  331193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:24:36.907822  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.921121  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.936721  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.953833  331193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:24:36.965996  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.978309  331193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.990625  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:37.003743  331193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:24:37.014928  331193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:24:37.025044  331193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:37.152253  331193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:24:38.178819  331193 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.026527319s)
	I1210 06:24:38.178854  331193 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:24:38.178926  331193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:24:38.184419  331193 start.go:564] Will wait 60s for crictl version
	I1210 06:24:38.184506  331193 ssh_runner.go:195] Run: which crictl
	I1210 06:24:38.190349  331193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:24:38.226300  331193 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:24:38.226386  331193 ssh_runner.go:195] Run: crio --version
	I1210 06:24:38.271269  331193 ssh_runner.go:195] Run: crio --version
	I1210 06:24:38.315102  331193 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 06:24:36.083821  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:38.084410  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:38.316794  331193 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-643991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:38.341392  331193 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:24:38.346769  331193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:24:38.361200  331193 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:24:38.361325  331193 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:24:38.361375  331193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:24:38.403623  331193 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:24:38.403651  331193 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:24:38.403722  331193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:24:38.442146  331193 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:24:38.442169  331193 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:24:38.442178  331193 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1210 06:24:38.442300  331193 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-643991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:24:38.442375  331193 ssh_runner.go:195] Run: crio config
	I1210 06:24:38.505002  331193 cni.go:84] Creating CNI manager for ""
	I1210 06:24:38.505028  331193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:38.505047  331193 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:24:38.505073  331193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643991 NodeName:default-k8s-diff-port-643991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:24:38.505241  331193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:24:38.505319  331193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:24:38.517508  331193 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:24:38.517583  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:24:38.529600  331193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 06:24:38.550178  331193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:24:38.571541  331193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 06:24:38.590976  331193 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:24:38.596252  331193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:24:38.610456  331193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:38.717727  331193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:24:38.756666  331193 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991 for IP: 192.168.76.2
	I1210 06:24:38.756688  331193 certs.go:195] generating shared ca certs ...
	I1210 06:24:38.756706  331193 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:38.756845  331193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:24:38.756911  331193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:24:38.756931  331193 certs.go:257] generating profile certs ...
	I1210 06:24:38.757041  331193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.key
	I1210 06:24:38.757134  331193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key.a53e5786
	I1210 06:24:38.757192  331193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key
	I1210 06:24:38.757410  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:24:38.757463  331193 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:24:38.757506  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:24:38.757546  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:24:38.757580  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:24:38.758072  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:24:38.758191  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:24:38.759567  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:24:38.786970  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:24:38.814783  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:24:38.841683  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:24:38.870316  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:24:38.895382  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:24:38.922826  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:24:38.950738  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:24:38.977189  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:24:39.006990  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:24:39.035295  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:24:39.062103  331193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:24:39.080812  331193 ssh_runner.go:195] Run: openssl version
	I1210 06:24:39.088878  331193 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.100012  331193 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:24:39.111019  331193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.117095  331193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.117163  331193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.174768  331193 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:24:39.186404  331193 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.197217  331193 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:24:39.208928  331193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.214853  331193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.214920  331193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.274055  331193 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:24:39.284866  331193 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.295412  331193 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:24:39.308005  331193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.314292  331193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.314365  331193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.374427  331193 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:24:39.386867  331193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:24:39.392763  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:24:39.452683  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:24:39.512535  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:24:39.562115  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:24:39.622216  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:24:39.684511  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:24:39.746211  331193 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:39.746308  331193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:24:39.746381  331193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:24:39.788828  331193 cri.go:89] found id: "8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577"
	I1210 06:24:39.788851  331193 cri.go:89] found id: "939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231"
	I1210 06:24:39.788856  331193 cri.go:89] found id: "9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9"
	I1210 06:24:39.788861  331193 cri.go:89] found id: "e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13"
	I1210 06:24:39.788866  331193 cri.go:89] found id: ""
	I1210 06:24:39.788911  331193 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:24:39.805869  331193 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:39Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:24:39.805936  331193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:24:39.818381  331193 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:24:39.818402  331193 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:24:39.818587  331193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:24:39.829879  331193 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:24:39.831202  331193 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-643991" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:39.832361  331193 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8832/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-643991" cluster setting kubeconfig missing "default-k8s-diff-port-643991" context setting]
	I1210 06:24:39.833794  331193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:39.836442  331193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:24:39.847872  331193 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 06:24:39.848026  331193 kubeadm.go:602] duration metric: took 29.616854ms to restartPrimaryControlPlane
	I1210 06:24:39.848051  331193 kubeadm.go:403] duration metric: took 101.849914ms to StartCluster
	I1210 06:24:39.848095  331193 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:39.848178  331193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:39.850942  331193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:39.851362  331193 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:39.851496  331193 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:24:39.851611  331193 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-643991"
	I1210 06:24:39.851628  331193 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-643991"
	I1210 06:24:39.851633  331193 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-643991"
	I1210 06:24:39.851638  331193 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	W1210 06:24:39.851644  331193 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:24:39.851652  331193 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-643991"
	W1210 06:24:39.851662  331193 addons.go:248] addon dashboard should already be in state true
	I1210 06:24:39.851673  331193 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:24:39.851691  331193 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:24:39.851993  331193 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-643991"
	I1210 06:24:39.852013  331193 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643991"
	I1210 06:24:39.852174  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.852201  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.852289  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.855413  331193 out.go:179] * Verifying Kubernetes components...
	I1210 06:24:39.858890  331193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:39.884040  331193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:24:39.884934  331193 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-643991"
	W1210 06:24:39.884961  331193 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:24:39.884990  331193 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:24:39.885883  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.887424  331193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:24:39.888523  331193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:24:39.889694  331193 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:24:39.889718  331193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:24:39.889780  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:39.890593  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:24:39.890617  331193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:24:39.890675  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:39.926093  331193 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:24:39.926126  331193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:24:39.926186  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:39.935798  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:39.941129  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:39.960213  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:40.048204  331193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:24:40.067806  331193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:24:40.070912  331193 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643991" to be "Ready" ...
	I1210 06:24:40.078811  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:24:40.078845  331193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:24:40.106062  331193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:24:40.109845  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:24:40.109871  331193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:24:40.155735  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:24:40.155775  331193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:24:40.192253  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:24:40.192381  331193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:24:40.214622  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:24:40.214644  331193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:24:40.237925  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:24:40.237952  331193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:24:40.264863  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:24:40.264929  331193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:24:40.285257  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:24:40.285289  331193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:24:40.307998  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:24:40.308032  331193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:24:40.328972  331193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:24:41.274920  331193 node_ready.go:49] node "default-k8s-diff-port-643991" is "Ready"
	I1210 06:24:41.274963  331193 node_ready.go:38] duration metric: took 1.204019067s for node "default-k8s-diff-port-643991" to be "Ready" ...
	I1210 06:24:41.274982  331193 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:24:41.275043  331193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:24:41.839753  331193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.771911029s)
	I1210 06:24:41.839816  331193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.733712187s)
	I1210 06:24:41.839916  331193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.51090615s)
	I1210 06:24:41.839980  331193 api_server.go:72] duration metric: took 1.988583505s to wait for apiserver process to appear ...
	I1210 06:24:41.840190  331193 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:24:41.840212  331193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:24:41.842119  331193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-643991 addons enable metrics-server
	
	I1210 06:24:41.845927  331193 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:41.845957  331193 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:41.850363  331193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 06:24:37.982141  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:40.446587  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:42.447261  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:40.087071  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:42.580290  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:41.851971  331193 addons.go:530] duration metric: took 2.000507294s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:24:42.340651  331193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:24:42.345283  331193 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:42.345337  331193 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:42.841003  331193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:24:42.845358  331193 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1210 06:24:42.846546  331193 api_server.go:141] control plane version: v1.34.2
	I1210 06:24:42.846575  331193 api_server.go:131] duration metric: took 1.006376692s to wait for apiserver health ...
	I1210 06:24:42.846585  331193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:24:42.850254  331193 system_pods.go:59] 8 kube-system pods found
	I1210 06:24:42.850305  331193 system_pods.go:61] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:42.850315  331193 system_pods.go:61] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:42.850324  331193 system_pods.go:61] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:42.850330  331193 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:42.850337  331193 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:42.850343  331193 system_pods.go:61] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:42.850355  331193 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:42.850367  331193 system_pods.go:61] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:42.850375  331193 system_pods.go:74] duration metric: took 3.783492ms to wait for pod list to return data ...
	I1210 06:24:42.850387  331193 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:24:42.852891  331193 default_sa.go:45] found service account: "default"
	I1210 06:24:42.852913  331193 default_sa.go:55] duration metric: took 2.520509ms for default service account to be created ...
	I1210 06:24:42.852921  331193 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:24:42.855638  331193 system_pods.go:86] 8 kube-system pods found
	I1210 06:24:42.855665  331193 system_pods.go:89] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:42.855674  331193 system_pods.go:89] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:42.855682  331193 system_pods.go:89] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:42.855688  331193 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:42.855695  331193 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:42.855700  331193 system_pods.go:89] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:42.855706  331193 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:42.855710  331193 system_pods.go:89] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:42.855718  331193 system_pods.go:126] duration metric: took 2.791834ms to wait for k8s-apps to be running ...
	I1210 06:24:42.855727  331193 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:24:42.855774  331193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:42.869368  331193 system_svc.go:56] duration metric: took 13.631016ms WaitForService to wait for kubelet
	I1210 06:24:42.869399  331193 kubeadm.go:587] duration metric: took 3.018002854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:42.869427  331193 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:24:42.872414  331193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:24:42.872446  331193 node_conditions.go:123] node cpu capacity is 8
	I1210 06:24:42.872463  331193 node_conditions.go:105] duration metric: took 3.02997ms to run NodePressure ...
	I1210 06:24:42.872490  331193 start.go:242] waiting for startup goroutines ...
	I1210 06:24:42.872498  331193 start.go:247] waiting for cluster config update ...
	I1210 06:24:42.872511  331193 start.go:256] writing updated cluster config ...
	I1210 06:24:42.872859  331193 ssh_runner.go:195] Run: rm -f paused
	I1210 06:24:42.877443  331193 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:42.881288  331193 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:24:44.947238  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:47.447498  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:45.079791  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:47.080378  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:49.080980  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:44.888282  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:47.387268  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:49.388359  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 10 06:24:23 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:23.182755613Z" level=info msg="Created container 8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s/kubernetes-dashboard" id=0fcf3f91-38c3-4fba-b3ea-0ec6a545da9c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:23 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:23.183509967Z" level=info msg="Starting container: 8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc" id=434080c6-c706-46bf-a91d-4f9d0f04a6ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:23 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:23.18552375Z" level=info msg="Started container" PID=1747 containerID=8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s/kubernetes-dashboard id=434080c6-c706-46bf-a91d-4f9d0f04a6ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5117b4231bcec78d4f64fead77dc454694eab39f9326a2c5a3d8d93aff92fe1
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.616213543Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d790219a-8569-47ce-807e-55cffe530ca4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.619979235Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=88228445-7216-4b00-8ad4-43c1f16540e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.621571275Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00411ec9-5328-46e2-9183-976d59896521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.621721804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628251113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628550349Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5c0f1429f8c9a8e68469e545ceb533d1165d796ac2ba33ee877f486d521b18f5/merged/etc/passwd: no such file or directory"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628589304Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5c0f1429f8c9a8e68469e545ceb533d1165d796ac2ba33ee877f486d521b18f5/merged/etc/group: no such file or directory"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628906181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.678099802Z" level=info msg="Created container 914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a: kube-system/storage-provisioner/storage-provisioner" id=00411ec9-5328-46e2-9183-976d59896521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.67886329Z" level=info msg="Starting container: 914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a" id=c44b0fc3-26fa-4dee-8c98-e33f7c873cdc name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.681644208Z" level=info msg="Started container" PID=1771 containerID=914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a description=kube-system/storage-provisioner/storage-provisioner id=c44b0fc3-26fa-4dee-8c98-e33f7c873cdc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9154e018aebbee1a98b28c2d4ff34f10668dd884489e2ccc2f268e49c3c69387
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.492829357Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b6e92ee6-692f-44ac-beaa-57ced53db679 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.494376182Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eed97e98-96b4-4c51-917a-f8853c88465d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.497277771Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper" id=767a8ed9-de6c-4884-b7f9-52b6cecac8fc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.497877907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.504695254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.505374328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.54050123Z" level=info msg="Created container 2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper" id=767a8ed9-de6c-4884-b7f9-52b6cecac8fc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.541235164Z" level=info msg="Starting container: 2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f" id=f1eed39d-00c3-416b-a124-eb81c0d34372 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.543641777Z" level=info msg="Started container" PID=1804 containerID=2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper id=f1eed39d-00c3-416b-a124-eb81c0d34372 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d3a58180bdddaa696fb089e4fe37cd68a58d1a7691e1842d9834113b65e8f6e
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.631086935Z" level=info msg="Removing container: 7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac" id=2dcaa6b1-85d8-42b8-8e7a-95ecf3d14fef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.646860493Z" level=info msg="Removed container 7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper" id=2dcaa6b1-85d8-42b8-8e7a-95ecf3d14fef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2391ccb16a41b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   7d3a58180bddd       dashboard-metrics-scraper-5f989dc9cf-z4ftf       kubernetes-dashboard
	914c4088df00c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   9154e018aebbe       storage-provisioner                              kube-system
	8b02c6ca7d446       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   29 seconds ago      Running             kubernetes-dashboard        0                   f5117b4231bce       kubernetes-dashboard-8694d4445c-gwx7s            kubernetes-dashboard
	35bfd69509044       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   8c4f119a7ee59       busybox                                          default
	a64b25c87547a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   f55d620ff6ebf       coredns-5dd5756b68-gmssk                         kube-system
	1a8811723167f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   9154e018aebbe       storage-provisioner                              kube-system
	b21b5007f34e2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   2b0615e879c66       kube-proxy-v9pgf                                 kube-system
	d5ff7c07b23bb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   e39750f1f0152       kindnet-2qg8n                                    kube-system
	93a32a0fa3cab       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           50 seconds ago      Running             kube-apiserver              0                   a7c3f63289700       kube-apiserver-old-k8s-version-424086            kube-system
	99a520617b270       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           50 seconds ago      Running             kube-scheduler              0                   eb99bbb6cc2cf       kube-scheduler-old-k8s-version-424086            kube-system
	70b526ae1f4ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           50 seconds ago      Running             etcd                        0                   921d5c4220b08       etcd-old-k8s-version-424086                      kube-system
	eca25d4da6553       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           50 seconds ago      Running             kube-controller-manager     0                   e1b5ec6584f8c       kube-controller-manager-old-k8s-version-424086   kube-system
	
	
	==> coredns [a64b25c87547a694a7859016b2ba1fcc83c7b299676d2b8c2fcf983aafc02a6a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47268 - 45549 "HINFO IN 8003849988303824958.2538073435513461362. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054953365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-424086
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-424086
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=old-k8s-version-424086
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_22_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-424086
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:23:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-424086
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                e81e4360-349b-45ab-b112-f9ed8c9c5eab
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-gmssk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-old-k8s-version-424086                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-2qg8n                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-old-k8s-version-424086             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-old-k8s-version-424086    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-v9pgf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-old-k8s-version-424086             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-z4ftf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gwx7s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-424086 event: Registered Node old-k8s-version-424086 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-424086 status is now: NodeReady
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-424086 event: Registered Node old-k8s-version-424086 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [70b526ae1f4ce1d3bdeff2ca86e39c33688d70edf03a257a1b0eeda29e7059a9] <==
	{"level":"info","ts":"2025-12-10T06:24:02.10327Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-10T06:24:02.109079Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T06:24:02.109345Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:24:02.109392Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:24:02.109509Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T06:24:02.109526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T06:24:03.095231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:24:03.095277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:24:03.095323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T06:24:03.095337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.095342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.095351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.095359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.098176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:24:03.098197Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-424086 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:24:03.098202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:24:03.098509Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:24:03.098533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:24:03.099568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-10T06:24:03.099569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:24:34.798057Z","caller":"traceutil/trace.go:171","msg":"trace[446660794] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"107.846212ms","start":"2025-12-10T06:24:34.690187Z","end":"2025-12-10T06:24:34.798033Z","steps":["trace[446660794] 'read index received'  (duration: 107.654594ms)","trace[446660794] 'applied index is now lower than readState.Index'  (duration: 190.919µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:24:34.798182Z","caller":"traceutil/trace.go:171","msg":"trace[1125219541] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"159.709928ms","start":"2025-12-10T06:24:34.638435Z","end":"2025-12-10T06:24:34.798145Z","steps":["trace[1125219541] 'process raft request'  (duration: 159.437493ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:24:34.798216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.02688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-gmssk\" ","response":"range_response_count:1 size:4991"}
	{"level":"info","ts":"2025-12-10T06:24:34.79827Z","caller":"traceutil/trace.go:171","msg":"trace[1538081865] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-gmssk; range_end:; response_count:1; response_revision:633; }","duration":"108.103441ms","start":"2025-12-10T06:24:34.690157Z","end":"2025-12-10T06:24:34.79826Z","steps":["trace[1538081865] 'agreement among raft nodes before linearized reading'  (duration: 107.969386ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:24:34.977325Z","caller":"traceutil/trace.go:171","msg":"trace[440854227] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"153.68203ms","start":"2025-12-10T06:24:34.823614Z","end":"2025-12-10T06:24:34.977296Z","steps":["trace[440854227] 'process raft request'  (duration: 84.563996ms)","trace[440854227] 'compare'  (duration: 68.946995ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:24:52 up  1:07,  0 user,  load average: 5.28, 4.93, 3.03
	Linux old-k8s-version-424086 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5ff7c07b23bb6e013e976a59d08c0963394c1d3c83054f617318b04962837f7] <==
	I1210 06:24:05.023685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:05.023950       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:24:05.024120       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:05.024144       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:05.024178       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:05.322836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:05.322948       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:05.322974       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:05.420078       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:05.723093       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:05.723125       1 metrics.go:72] Registering metrics
	I1210 06:24:05.723177       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:15.323347       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:15.323417       1 main.go:301] handling current node
	I1210 06:24:25.323590       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:25.323663       1 main.go:301] handling current node
	I1210 06:24:35.323245       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:35.323313       1 main.go:301] handling current node
	I1210 06:24:45.328539       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:45.328586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5] <==
	I1210 06:24:04.135392       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 06:24:04.135436       1 aggregator.go:166] initial CRD sync complete...
	I1210 06:24:04.135443       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 06:24:04.135450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:24:04.135456       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:24:04.135508       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 06:24:04.135536       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:24:04.135653       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 06:24:04.136168       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 06:24:04.136186       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1210 06:24:04.136200       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1210 06:24:04.143529       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:24:04.182434       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:04.994689       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 06:24:05.034920       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 06:24:05.040847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:24:05.057662       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:05.067613       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:05.078434       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 06:24:05.121601       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.48.223"}
	I1210 06:24:05.136609       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.149.231"}
	I1210 06:24:16.463205       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 06:24:16.616098       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:16.664830       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 06:24:16.664832       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [eca25d4da655329c0f900bc2d9a38df2f8b3abd27a1fb23973129f968c2ffbea] <==
	I1210 06:24:16.535436       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 06:24:16.668577       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1210 06:24:16.669042       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1210 06:24:16.678288       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-z4ftf"
	I1210 06:24:16.678317       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-gwx7s"
	I1210 06:24:16.687502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.323678ms"
	I1210 06:24:16.688427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.814699ms"
	I1210 06:24:16.698719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.092791ms"
	I1210 06:24:16.698952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.474613ms"
	I1210 06:24:16.699212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="178.556µs"
	I1210 06:24:16.711910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.094141ms"
	I1210 06:24:16.712157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.266µs"
	I1210 06:24:16.719501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="95.035µs"
	I1210 06:24:16.854766       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:24:16.906985       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:24:16.907019       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 06:24:19.572818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.78µs"
	I1210 06:24:20.577285       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.339µs"
	I1210 06:24:21.579250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.719µs"
	I1210 06:24:23.593708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.187336ms"
	I1210 06:24:23.593830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.605µs"
	I1210 06:24:35.970136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.821046ms"
	I1210 06:24:35.970346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.542µs"
	I1210 06:24:38.648531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="105.06µs"
	I1210 06:24:47.005831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.965µs"
	
	
	==> kube-proxy [b21b5007f34e2df91ee40c8acf976b58a08736cb563430c576aebb7a80a57bd7] <==
	I1210 06:24:04.908319       1 server_others.go:69] "Using iptables proxy"
	I1210 06:24:04.919148       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1210 06:24:04.939909       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:04.942449       1 server_others.go:152] "Using iptables Proxier"
	I1210 06:24:04.942504       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 06:24:04.942515       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 06:24:04.942553       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 06:24:04.942800       1 server.go:846] "Version info" version="v1.28.0"
	I1210 06:24:04.942813       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:04.943411       1 config.go:188] "Starting service config controller"
	I1210 06:24:04.943441       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 06:24:04.943463       1 config.go:315] "Starting node config controller"
	I1210 06:24:04.943479       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 06:24:04.943561       1 config.go:97] "Starting endpoint slice config controller"
	I1210 06:24:04.943587       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 06:24:05.043668       1 shared_informer.go:318] Caches are synced for node config
	I1210 06:24:05.043679       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 06:24:05.043726       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [99a520617b27091388284c36bef3465458e40aa0ab841df386ee409f39ccbee2] <==
	I1210 06:24:02.491764       1 serving.go:348] Generated self-signed cert in-memory
	W1210 06:24:04.087740       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:24:04.087902       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:24:04.087969       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:24:04.088001       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:24:04.106402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1210 06:24:04.106436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:04.108121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:04.108159       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 06:24:04.109076       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 06:24:04.109329       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 06:24:04.209249       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765521     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbz78\" (UniqueName: \"kubernetes.io/projected/8f128d15-7745-4c29-bb40-04e58c18e98c-kube-api-access-rbz78\") pod \"dashboard-metrics-scraper-5f989dc9cf-z4ftf\" (UID: \"8f128d15-7745-4c29-bb40-04e58c18e98c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf"
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765600     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f128d15-7745-4c29-bb40-04e58c18e98c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-z4ftf\" (UID: \"8f128d15-7745-4c29-bb40-04e58c18e98c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf"
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765627     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m994j\" (UniqueName: \"kubernetes.io/projected/3e9c8ba6-46d4-4305-9a87-ffc54ec95c34-kube-api-access-m994j\") pod \"kubernetes-dashboard-8694d4445c-gwx7s\" (UID: \"3e9c8ba6-46d4-4305-9a87-ffc54ec95c34\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s"
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765650     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3e9c8ba6-46d4-4305-9a87-ffc54ec95c34-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gwx7s\" (UID: \"3e9c8ba6-46d4-4305-9a87-ffc54ec95c34\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s"
	Dec 10 06:24:19 old-k8s-version-424086 kubelet[735]: I1210 06:24:19.560197     735 scope.go:117] "RemoveContainer" containerID="9dc3ed6a9254e3a9c83fc19a222564294ae546fcf8722d731a8a1b16ab52311a"
	Dec 10 06:24:20 old-k8s-version-424086 kubelet[735]: I1210 06:24:20.564716     735 scope.go:117] "RemoveContainer" containerID="9dc3ed6a9254e3a9c83fc19a222564294ae546fcf8722d731a8a1b16ab52311a"
	Dec 10 06:24:20 old-k8s-version-424086 kubelet[735]: I1210 06:24:20.564994     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:20 old-k8s-version-424086 kubelet[735]: E1210 06:24:20.565359     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:21 old-k8s-version-424086 kubelet[735]: I1210 06:24:21.568491     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:21 old-k8s-version-424086 kubelet[735]: E1210 06:24:21.568878     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:23 old-k8s-version-424086 kubelet[735]: I1210 06:24:23.586792     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s" podStartSLOduration=1.465063555 podCreationTimestamp="2025-12-10 06:24:16 +0000 UTC" firstStartedPulling="2025-12-10 06:24:17.017570258 +0000 UTC m=+15.618253228" lastFinishedPulling="2025-12-10 06:24:23.139223685 +0000 UTC m=+21.739906673" observedRunningTime="2025-12-10 06:24:23.586299045 +0000 UTC m=+22.186982030" watchObservedRunningTime="2025-12-10 06:24:23.586717 +0000 UTC m=+22.187399984"
	Dec 10 06:24:26 old-k8s-version-424086 kubelet[735]: I1210 06:24:26.992282     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:26 old-k8s-version-424086 kubelet[735]: E1210 06:24:26.992773     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:35 old-k8s-version-424086 kubelet[735]: I1210 06:24:35.615410     735 scope.go:117] "RemoveContainer" containerID="1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: I1210 06:24:38.492016     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: I1210 06:24:38.629814     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: I1210 06:24:38.630067     735 scope.go:117] "RemoveContainer" containerID="2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: E1210 06:24:38.630454     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:46 old-k8s-version-424086 kubelet[735]: I1210 06:24:46.992919     735 scope.go:117] "RemoveContainer" containerID="2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	Dec 10 06:24:46 old-k8s-version-424086 kubelet[735]: E1210 06:24:46.993362     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:50 old-k8s-version-424086 kubelet[735]: I1210 06:24:50.165083     735 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: kubelet.service: Consumed 1.561s CPU time.
	
	
	==> kubernetes-dashboard [8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc] <==
	2025/12/10 06:24:23 Starting overwatch
	2025/12/10 06:24:23 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:23 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:23 Using secret token for csrf signing
	2025/12/10 06:24:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:23 Successful initial request to the apiserver, version: v1.28.0
	2025/12/10 06:24:23 Generating JWE encryption key
	2025/12/10 06:24:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:23 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:23 Creating in-cluster Sidecar client
	2025/12/10 06:24:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:23 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3] <==
	I1210 06:24:04.872822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:24:34.875636       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a] <==
	I1210 06:24:35.704910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:24:35.722237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:24:35.722385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 06:24:53.125322       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:24:53.125385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4180df8f-51ab-47df-91f5-dd51db49c438", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-424086_ba7bab8b-2a55-4bc7-8ced-f67781aaf0f4 became leader
	I1210 06:24:53.125488       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-424086_ba7bab8b-2a55-4bc7-8ced-f67781aaf0f4!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-424086 -n old-k8s-version-424086
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-424086 -n old-k8s-version-424086: exit status 2 (333.435896ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-424086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-424086
helpers_test.go:244: (dbg) docker inspect old-k8s-version-424086:

-- stdout --
	[
	    {
	        "Id": "d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe",
	        "Created": "2025-12-10T06:22:41.51619025Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321496,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:23:55.095716852Z",
	            "FinishedAt": "2025-12-10T06:23:54.137620257Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/hosts",
	        "LogPath": "/var/lib/docker/containers/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe/d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe-json.log",
	        "Name": "/old-k8s-version-424086",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-424086:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-424086",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d21017d71f3af8d7abafb6e9f6402086b5bc7efdc67803532796985e567044fe",
	                "LowerDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed813bbe06aa9d52f4b2ba3e4f390060eccae3897f3c072f46a421de8d0988d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-424086",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-424086/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-424086",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-424086",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-424086",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dda5b6fbdb9aea8eead8049005c4e2320586fb658ed66c793fc6840d4dc8f8ad",
	            "SandboxKey": "/var/run/docker/netns/dda5b6fbdb9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-424086": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "027b45f7486274e056d3788623afd124007b73108c46de2edc58de9683929366",
	                    "EndpointID": "9a6c1667ee8557daf9efe1776ae54f76294cca270f4d2a62c58b0c210e5a2e2f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "36:a3:3e:7f:1f:e0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-424086",
	                        "d21017d71f3a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
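For reference, the port mappings and addresses buried in the inspect JSON above can be pulled out directly with docker's Go-template support; a minimal sketch, reusing the container name from this failure:

	# host-port mappings (the NetworkSettings.Ports block above)
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-424086
	# container IP on the profile network (192.168.85.2 above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-424086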
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086: exit status 2 (331.566994ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
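The status check above renders only the Host field of minikube's status template, so the command can print "Running" for the host yet exit non-zero when other components are paused or stopped. A minimal sketch of fuller checks against the same profile (the extra field names assume minikube's documented status template):

	# full status for the profile
	out/minikube-linux-amd64 status -p old-k8s-version-424086
	# host, kubelet and apiserver states on one line
	out/minikube-linux-amd64 status -p old-k8s-version-424086 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'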
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-424086 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-424086 logs -n 25: (1.346439554s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-201263 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ ssh     │ -p bridge-201263 sudo crio config                                                                                                                                                                                                             │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p bridge-201263                                                                                                                                                                                                                              │ bridge-201263                │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                               │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                               │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:29
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:24:29.463157  331193 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:29.463277  331193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:29.463288  331193 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:29.463295  331193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:29.463635  331193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:29.464064  331193 out.go:368] Setting JSON to false
	I1210 06:24:29.465328  331193 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4020,"bootTime":1765343849,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:29.465387  331193 start.go:143] virtualization: kvm guest
	I1210 06:24:29.467168  331193 out.go:179] * [default-k8s-diff-port-643991] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:29.472151  331193 notify.go:221] Checking for updates...
	I1210 06:24:29.472194  331193 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:29.473813  331193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:29.475281  331193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:29.476775  331193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:29.480653  331193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:29.482107  331193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:29.527340  327833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.611746156s)
	I1210 06:24:29.527500  327833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.459286319s)
	I1210 06:24:29.527346  327833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.630139588s)
	I1210 06:24:29.527606  327833 api_server.go:72] duration metric: took 2.847745576s to wait for apiserver process to appear ...
	I1210 06:24:29.527659  327833 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:24:29.527682  327833 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 06:24:29.529329  327833 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-133470 addons enable metrics-server
	
	I1210 06:24:29.538145  327833 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:29.538175  327833 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
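The 500 responses above are driven by the two poststarthook checks still reporting failure (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) while every other apiserver check is already ok; individual checks can be queried by name once kubectl is pointed at this cluster. A minimal sketch, assuming a working kubeconfig for the profile:

	# per-check detail for the aggregate endpoint
	kubectl get --raw '/healthz?verbose'
	# query a single failing check by name
	kubectl get --raw '/healthz/poststarthook/rbac/bootstrap-roles'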
	I1210 06:24:29.542637  327833 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:24:29.483944  331193 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:29.484575  331193 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:29.511842  331193 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:29.511943  331193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:29.570351  331193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 06:24:29.560751805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:29.570447  331193 docker.go:319] overlay module found
	I1210 06:24:29.573605  331193 out.go:179] * Using the docker driver based on existing profile
	I1210 06:24:29.575214  331193 start.go:309] selected driver: docker
	I1210 06:24:29.575232  331193 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:29.575339  331193 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:29.575949  331193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:29.633518  331193 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-10 06:24:29.623561147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:29.633853  331193 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:29.633881  331193 cni.go:84] Creating CNI manager for ""
	I1210 06:24:29.633943  331193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:29.634002  331193 start.go:353] cluster config:
	{Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:29.636111  331193 out.go:179] * Starting "default-k8s-diff-port-643991" primary control-plane node in "default-k8s-diff-port-643991" cluster
	I1210 06:24:29.637418  331193 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:29.638784  331193 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:29.639957  331193 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:24:29.640000  331193 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:29.640013  331193 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:29.640039  331193 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:29.640097  331193 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:29.640113  331193 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 06:24:29.640238  331193 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json ...
	I1210 06:24:29.661567  331193 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:29.661588  331193 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:29.661608  331193 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:24:29.661667  331193 start.go:360] acquireMachinesLock for default-k8s-diff-port-643991: {Name:mk370efe05d640ea21e9150c952c3b99e34124d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:29.661731  331193 start.go:364] duration metric: took 44.211µs to acquireMachinesLock for "default-k8s-diff-port-643991"
	I1210 06:24:29.661754  331193 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:24:29.661764  331193 fix.go:54] fixHost starting: 
	I1210 06:24:29.661967  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:29.681408  331193 fix.go:112] recreateIfNeeded on default-k8s-diff-port-643991: state=Stopped err=<nil>
	W1210 06:24:29.681439  331193 fix.go:138] unexpected machine state, will restart: <nil>
	W1210 06:24:26.193807  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:28.194574  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	I1210 06:24:28.402525  326955 addons.go:530] duration metric: took 2.602830079s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:24:28.892120  326955 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 06:24:28.899537  326955 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:28.899646  326955 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:29.391259  326955 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1210 06:24:29.397271  326955 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1210 06:24:29.398588  326955 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:24:29.398617  326955 api_server.go:131] duration metric: took 1.007407345s to wait for apiserver health ...
	I1210 06:24:29.398627  326955 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:24:29.402410  326955 system_pods.go:59] 8 kube-system pods found
	I1210 06:24:29.402442  326955 system_pods.go:61] "coredns-7d764666f9-hr4gk" [2d1d5353-6d76-4f61-9e66-12eee045a735] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:29.402450  326955 system_pods.go:61] "etcd-no-preload-713838" [38765820-1675-45e4-ac49-ebd982a5f5a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:29.402456  326955 system_pods.go:61] "kindnet-28s4q" [55436b1b-68c3-4f73-8929-29ec9ae87ce6] Running
	I1210 06:24:29.402464  326955 system_pods.go:61] "kube-apiserver-no-preload-713838" [bfc05d05-cb3e-4380-be23-f5dd2c56ec7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:29.402487  326955 system_pods.go:61] "kube-controller-manager-no-preload-713838" [e2fa176c-fbca-43a7-aa7d-84b29a495f53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:29.402494  326955 system_pods.go:61] "kube-proxy-c62hk" [b48eb137-310e-4bea-a99e-bb776ad77807] Running
	I1210 06:24:29.402502  326955 system_pods.go:61] "kube-scheduler-no-preload-713838" [c61624ba-385e-49d5-8008-fa1083a22b1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:29.402510  326955 system_pods.go:61] "storage-provisioner" [e89d4b38-da41-4612-8cf9-1440b142a9af] Running
	I1210 06:24:29.402519  326955 system_pods.go:74] duration metric: took 3.884956ms to wait for pod list to return data ...
	I1210 06:24:29.402528  326955 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:24:29.405488  326955 default_sa.go:45] found service account: "default"
	I1210 06:24:29.405513  326955 default_sa.go:55] duration metric: took 2.978219ms for default service account to be created ...
	I1210 06:24:29.405524  326955 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:24:29.408711  326955 system_pods.go:86] 8 kube-system pods found
	I1210 06:24:29.408748  326955 system_pods.go:89] "coredns-7d764666f9-hr4gk" [2d1d5353-6d76-4f61-9e66-12eee045a735] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:29.408760  326955 system_pods.go:89] "etcd-no-preload-713838" [38765820-1675-45e4-ac49-ebd982a5f5a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:29.408771  326955 system_pods.go:89] "kindnet-28s4q" [55436b1b-68c3-4f73-8929-29ec9ae87ce6] Running
	I1210 06:24:29.408781  326955 system_pods.go:89] "kube-apiserver-no-preload-713838" [bfc05d05-cb3e-4380-be23-f5dd2c56ec7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:29.408795  326955 system_pods.go:89] "kube-controller-manager-no-preload-713838" [e2fa176c-fbca-43a7-aa7d-84b29a495f53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:29.408806  326955 system_pods.go:89] "kube-proxy-c62hk" [b48eb137-310e-4bea-a99e-bb776ad77807] Running
	I1210 06:24:29.408815  326955 system_pods.go:89] "kube-scheduler-no-preload-713838" [c61624ba-385e-49d5-8008-fa1083a22b1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:29.408826  326955 system_pods.go:89] "storage-provisioner" [e89d4b38-da41-4612-8cf9-1440b142a9af] Running
	I1210 06:24:29.408835  326955 system_pods.go:126] duration metric: took 3.303792ms to wait for k8s-apps to be running ...
	I1210 06:24:29.408843  326955 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:24:29.408886  326955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:29.427035  326955 system_svc.go:56] duration metric: took 18.181731ms WaitForService to wait for kubelet
	I1210 06:24:29.427067  326955 kubeadm.go:587] duration metric: took 3.627496643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:29.427089  326955 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:24:29.430862  326955 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:24:29.430891  326955 node_conditions.go:123] node cpu capacity is 8
	I1210 06:24:29.430908  326955 node_conditions.go:105] duration metric: took 3.813169ms to run NodePressure ...
	I1210 06:24:29.430923  326955 start.go:242] waiting for startup goroutines ...
	I1210 06:24:29.430932  326955 start.go:247] waiting for cluster config update ...
	I1210 06:24:29.430945  326955 start.go:256] writing updated cluster config ...
	I1210 06:24:29.431251  326955 ssh_runner.go:195] Run: rm -f paused
	I1210 06:24:29.436265  326955 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:29.440395  326955 pod_ready.go:83] waiting for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:24:31.446014  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:24:29.543898  327833 addons.go:530] duration metric: took 2.863978518s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:24:30.028267  327833 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1210 06:24:30.033614  327833 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1210 06:24:30.034748  327833 api_server.go:141] control plane version: v1.34.2
	I1210 06:24:30.034783  327833 api_server.go:131] duration metric: took 507.112225ms to wait for apiserver health ...
	I1210 06:24:30.034795  327833 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:24:30.038895  327833 system_pods.go:59] 8 kube-system pods found
	I1210 06:24:30.038939  327833 system_pods.go:61] "coredns-66bc5c9577-gw75x" [e735e195-23a6-4d4f-9d07-f49ed4f8e1ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:30.038952  327833 system_pods.go:61] "etcd-embed-certs-133470" [25d5119b-7d9d-4093-abab-f0d2a4164472] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:30.038965  327833 system_pods.go:61] "kindnet-zhm6w" [7ab9de47-d8c7-438f-892e-28d2c4fd45b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:30.038975  327833 system_pods.go:61] "kube-apiserver-embed-certs-133470" [c32afacf-80f8-4b1c-814f-80da0f251890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:30.038987  327833 system_pods.go:61] "kube-controller-manager-embed-certs-133470" [c26fa721-528e-4575-bd96-ae3cb1e0f65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:30.039000  327833 system_pods.go:61] "kube-proxy-fkdk9" [e4897efd-dd92-4bec-8784-0352ec933eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:30.039018  327833 system_pods.go:61] "kube-scheduler-embed-certs-133470" [f745fb12-43ce-46e0-8965-eb2683233045] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:30.039031  327833 system_pods.go:61] "storage-provisioner" [fc2a0a30-365d-40a5-9f1a-bc551e6beec4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:30.039043  327833 system_pods.go:74] duration metric: took 4.241151ms to wait for pod list to return data ...
	I1210 06:24:30.039063  327833 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:24:30.041895  327833 default_sa.go:45] found service account: "default"
	I1210 06:24:30.041924  327833 default_sa.go:55] duration metric: took 2.850805ms for default service account to be created ...
	I1210 06:24:30.041937  327833 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:24:30.045212  327833 system_pods.go:86] 8 kube-system pods found
	I1210 06:24:30.045241  327833 system_pods.go:89] "coredns-66bc5c9577-gw75x" [e735e195-23a6-4d4f-9d07-f49ed4f8e1ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:30.045272  327833 system_pods.go:89] "etcd-embed-certs-133470" [25d5119b-7d9d-4093-abab-f0d2a4164472] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:30.045280  327833 system_pods.go:89] "kindnet-zhm6w" [7ab9de47-d8c7-438f-892e-28d2c4fd45b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:30.045291  327833 system_pods.go:89] "kube-apiserver-embed-certs-133470" [c32afacf-80f8-4b1c-814f-80da0f251890] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:30.045296  327833 system_pods.go:89] "kube-controller-manager-embed-certs-133470" [c26fa721-528e-4575-bd96-ae3cb1e0f65e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:30.045303  327833 system_pods.go:89] "kube-proxy-fkdk9" [e4897efd-dd92-4bec-8784-0352ec933eba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:30.045311  327833 system_pods.go:89] "kube-scheduler-embed-certs-133470" [f745fb12-43ce-46e0-8965-eb2683233045] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:30.045330  327833 system_pods.go:89] "storage-provisioner" [fc2a0a30-365d-40a5-9f1a-bc551e6beec4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:30.045342  327833 system_pods.go:126] duration metric: took 3.399379ms to wait for k8s-apps to be running ...
	I1210 06:24:30.045350  327833 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:24:30.045387  327833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:30.060920  327833 system_svc.go:56] duration metric: took 15.559546ms WaitForService to wait for kubelet
	I1210 06:24:30.060953  327833 kubeadm.go:587] duration metric: took 3.381092368s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:30.060974  327833 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:24:30.064243  327833 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:24:30.064277  327833 node_conditions.go:123] node cpu capacity is 8
	I1210 06:24:30.064294  327833 node_conditions.go:105] duration metric: took 3.313458ms to run NodePressure ...
	I1210 06:24:30.064309  327833 start.go:242] waiting for startup goroutines ...
	I1210 06:24:30.064321  327833 start.go:247] waiting for cluster config update ...
	I1210 06:24:30.064331  327833 start.go:256] writing updated cluster config ...
	I1210 06:24:30.064637  327833 ssh_runner.go:195] Run: rm -f paused
	I1210 06:24:30.069283  327833 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:30.073958  327833 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:24:32.079411  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:34.080413  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
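The pod_ready waits above repeatedly poll kube-system pods for the Ready condition until a 4m0s deadline. A minimal client-go sketch of that kind of check (illustrative only, not minikube's pod_ready.go; the kubeconfig path and label selector are assumptions) looks like this:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod has condition Ready=True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumption: a local kubeconfig path; minikube uses its own profile kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll a pod selected by label (e.g. k8s-app=kube-dns) until Ready or timeout.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    		if err == nil && len(pods.Items) > 0 && podIsReady(&pods.Items[0]) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for Ready")
    }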
	I1210 06:24:29.683735  331193 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-643991" ...
	I1210 06:24:29.683810  331193 cli_runner.go:164] Run: docker start default-k8s-diff-port-643991
	I1210 06:24:29.979666  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:30.003869  331193 kic.go:430] container "default-k8s-diff-port-643991" state is running.
	I1210 06:24:30.004450  331193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:24:30.028650  331193 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/config.json ...
	I1210 06:24:30.028920  331193 machine.go:94] provisionDockerMachine start ...
	I1210 06:24:30.029007  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:30.052032  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:30.052343  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:30.052360  331193 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:24:30.053182  331193 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42978->127.0.0.1:33129: read: connection reset by peer
	I1210 06:24:33.203860  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643991
	
	I1210 06:24:33.203892  331193 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-643991"
	I1210 06:24:33.203955  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:33.234171  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:33.234533  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:33.234551  331193 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-643991 && echo "default-k8s-diff-port-643991" | sudo tee /etc/hostname
	I1210 06:24:33.411300  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-643991
	
	I1210 06:24:33.411628  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:33.438798  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:33.439123  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:33.439155  331193 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-643991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-643991/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-643991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:24:33.601272  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: 
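The SSH command above updates /etc/hosts idempotently: if a line already ends with the hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 entry or appends a new one, so repeated provisioning leaves a single entry. A rough Go equivalent of that logic (a sketch only; minikube performs this over SSH as shown, and the path and hostname here are just parameters):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostname makes sure the hosts file maps 127.0.1.1 to the given name,
    // mirroring the idempotent shell snippet in the log above.
    func ensureHostname(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")

    	// Already present? Nothing to do.
    	for _, l := range lines {
    		fields := strings.Fields(l)
    		if len(fields) >= 2 && fields[len(fields)-1] == name {
    			return nil
    		}
    	}

    	// Rewrite an existing 127.0.1.1 line, or append a new one.
    	replaced := false
    	for i, l := range lines {
    		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			replaced = true
    			break
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+name)
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostname("/etc/hosts", "default-k8s-diff-port-643991"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }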
	I1210 06:24:33.601323  331193 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:24:33.601369  331193 ubuntu.go:190] setting up certificates
	I1210 06:24:33.601379  331193 provision.go:84] configureAuth start
	I1210 06:24:33.601459  331193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:24:33.628451  331193 provision.go:143] copyHostCerts
	I1210 06:24:33.628540  331193 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:24:33.628555  331193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:24:33.628639  331193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:24:33.628745  331193 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:24:33.628752  331193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:24:33.628788  331193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:24:33.628857  331193 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:24:33.628863  331193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:24:33.628899  331193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:24:33.628975  331193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-643991 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-643991 localhost minikube]
	I1210 06:24:33.955160  331193 provision.go:177] copyRemoteCerts
	I1210 06:24:33.955246  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:24:33.955303  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:33.981995  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:34.096392  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:24:34.125757  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 06:24:34.152687  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:24:34.177938  331193 provision.go:87] duration metric: took 576.537673ms to configureAuth
	I1210 06:24:34.177967  331193 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:24:34.178176  331193 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:34.178297  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:34.205617  331193 main.go:143] libmachine: Using SSH client type: native
	I1210 06:24:34.205916  331193 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1210 06:24:34.205954  331193 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1210 06:24:30.194799  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:32.695060  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:34.802354  321295 pod_ready.go:104] pod "coredns-5dd5756b68-gmssk" is not "Ready", error: <nil>
	W1210 06:24:33.452082  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:35.968068  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:24:36.195848  321295 pod_ready.go:94] pod "coredns-5dd5756b68-gmssk" is "Ready"
	I1210 06:24:36.195877  321295 pod_ready.go:86] duration metric: took 30.508462663s for pod "coredns-5dd5756b68-gmssk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.199954  321295 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.205759  321295 pod_ready.go:94] pod "etcd-old-k8s-version-424086" is "Ready"
	I1210 06:24:36.205791  321295 pod_ready.go:86] duration metric: took 5.808338ms for pod "etcd-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.210592  321295 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.218767  321295 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-424086" is "Ready"
	I1210 06:24:36.218885  321295 pod_ready.go:86] duration metric: took 8.266164ms for pod "kube-apiserver-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.224034  321295 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.391647  321295 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-424086" is "Ready"
	I1210 06:24:36.391679  321295 pod_ready.go:86] duration metric: took 167.619952ms for pod "kube-controller-manager-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.592384  321295 pod_ready.go:83] waiting for pod "kube-proxy-v9pgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:36.992365  321295 pod_ready.go:94] pod "kube-proxy-v9pgf" is "Ready"
	I1210 06:24:36.992397  321295 pod_ready.go:86] duration metric: took 399.981945ms for pod "kube-proxy-v9pgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:37.193251  321295 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:37.591501  321295 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-424086" is "Ready"
	I1210 06:24:37.591532  321295 pod_ready.go:86] duration metric: took 398.256722ms for pod "kube-scheduler-old-k8s-version-424086" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:24:37.591548  321295 pod_ready.go:40] duration metric: took 31.910201225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:37.662964  321295 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1210 06:24:37.703930  321295 out.go:203] 
	W1210 06:24:37.707593  321295 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1210 06:24:37.712692  321295 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1210 06:24:37.732833  321295 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-424086" cluster and "default" namespace by default
	I1210 06:24:35.547354  331193 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:24:35.547379  331193 machine.go:97] duration metric: took 5.518440159s to provisionDockerMachine
	I1210 06:24:35.547393  331193 start.go:293] postStartSetup for "default-k8s-diff-port-643991" (driver="docker")
	I1210 06:24:35.547407  331193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:24:35.547499  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:24:35.547554  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.575350  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:35.697945  331193 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:24:35.705182  331193 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:24:35.705215  331193 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:24:35.705229  331193 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:24:35.705290  331193 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:24:35.705526  331193 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:24:35.705704  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:24:35.718185  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:24:35.745926  331193 start.go:296] duration metric: took 198.518886ms for postStartSetup
	I1210 06:24:35.746085  331193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:24:35.746147  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.777230  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:35.889165  331193 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:24:35.896174  331193 fix.go:56] duration metric: took 6.234403861s for fixHost
	I1210 06:24:35.896201  331193 start.go:83] releasing machines lock for "default-k8s-diff-port-643991", held for 6.234458178s
	I1210 06:24:35.896265  331193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-643991
	I1210 06:24:35.923656  331193 ssh_runner.go:195] Run: cat /version.json
	I1210 06:24:35.923728  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.924007  331193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:24:35.924105  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:35.957157  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:35.965882  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:36.073311  331193 ssh_runner.go:195] Run: systemctl --version
	I1210 06:24:36.164641  331193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:24:36.224004  331193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:24:36.231272  331193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:24:36.231368  331193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:24:36.243888  331193 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:24:36.243910  331193 start.go:496] detecting cgroup driver to use...
	I1210 06:24:36.243938  331193 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:24:36.243972  331193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:24:36.265874  331193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:24:36.286972  331193 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:24:36.287031  331193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:24:36.310408  331193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:24:36.331188  331193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:24:36.456751  331193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:24:36.580985  331193 docker.go:234] disabling docker service ...
	I1210 06:24:36.581051  331193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:24:36.601430  331193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:24:36.619968  331193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:24:36.740370  331193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:24:36.855939  331193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:24:36.874056  331193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:24:36.894784  331193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:24:36.894865  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.907755  331193 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:24:36.907822  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.921121  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.936721  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.953833  331193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:24:36.965996  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.978309  331193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:36.990625  331193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:24:37.003743  331193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:24:37.014928  331193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:24:37.025044  331193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:37.152253  331193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:24:38.178819  331193 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.026527319s)
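Taken together, the sed edits above amount to roughly the following settings in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted (reconstructed from the commands in the log, not captured from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]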
	I1210 06:24:38.178854  331193 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:24:38.178926  331193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:24:38.184419  331193 start.go:564] Will wait 60s for crictl version
	I1210 06:24:38.184506  331193 ssh_runner.go:195] Run: which crictl
	I1210 06:24:38.190349  331193 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:24:38.226300  331193 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:24:38.226386  331193 ssh_runner.go:195] Run: crio --version
	I1210 06:24:38.271269  331193 ssh_runner.go:195] Run: crio --version
	I1210 06:24:38.315102  331193 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1210 06:24:36.083821  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:38.084410  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:38.316794  331193 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-643991 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:38.341392  331193 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 06:24:38.346769  331193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:24:38.361200  331193 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:24:38.361325  331193 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 06:24:38.361375  331193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:24:38.403623  331193 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:24:38.403651  331193 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:24:38.403722  331193 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:24:38.442146  331193 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:24:38.442169  331193 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:24:38.442178  331193 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.2 crio true true} ...
	I1210 06:24:38.442300  331193 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-643991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:24:38.442375  331193 ssh_runner.go:195] Run: crio config
	I1210 06:24:38.505002  331193 cni.go:84] Creating CNI manager for ""
	I1210 06:24:38.505028  331193 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:38.505047  331193 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:24:38.505073  331193 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-643991 NodeName:default-k8s-diff-port-643991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:24:38.505241  331193 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-643991"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
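The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that splits such a stream and reports each document's apiVersion and kind (illustrative; it uses gopkg.in/yaml.v3 rather than the kubeadm API types, and the file path is taken from the log above):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// Decode one YAML document at a time until EOF.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }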
	I1210 06:24:38.505319  331193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 06:24:38.517508  331193 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:24:38.517583  331193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:24:38.529600  331193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1210 06:24:38.550178  331193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 06:24:38.571541  331193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 06:24:38.590976  331193 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:24:38.596252  331193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:24:38.610456  331193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:38.717727  331193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:24:38.756666  331193 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991 for IP: 192.168.76.2
	I1210 06:24:38.756688  331193 certs.go:195] generating shared ca certs ...
	I1210 06:24:38.756706  331193 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:38.756845  331193 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:24:38.756911  331193 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:24:38.756931  331193 certs.go:257] generating profile certs ...
	I1210 06:24:38.757041  331193 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/client.key
	I1210 06:24:38.757134  331193 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key.a53e5786
	I1210 06:24:38.757192  331193 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key
	I1210 06:24:38.757410  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:24:38.757463  331193 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:24:38.757506  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:24:38.757546  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:24:38.757580  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:24:38.758072  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:24:38.758191  331193 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:24:38.759567  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:24:38.786970  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:24:38.814783  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:24:38.841683  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:24:38.870316  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 06:24:38.895382  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:24:38.922826  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:24:38.950738  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/default-k8s-diff-port-643991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:24:38.977189  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:24:39.006990  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:24:39.035295  331193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:24:39.062103  331193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:24:39.080812  331193 ssh_runner.go:195] Run: openssl version
	I1210 06:24:39.088878  331193 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.100012  331193 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:24:39.111019  331193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.117095  331193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.117163  331193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:24:39.174768  331193 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:24:39.186404  331193 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.197217  331193 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:24:39.208928  331193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.214853  331193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.214920  331193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:24:39.274055  331193 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:24:39.284866  331193 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.295412  331193 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:24:39.308005  331193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.314292  331193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.314365  331193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:24:39.374427  331193 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
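The sequence above installs each CA certificate by copying it to /usr/share/ca-certificates, computing its OpenSSL subject hash, and checking for a <hash>.0 symlink in /etc/ssl/certs. A minimal Go sketch of that pattern (illustrative only; it shells out to the same openssl and ln invocations that appear in the log, and would need root on the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and creates the
    // /etc/ssl/certs/<hash>.0 symlink that TLS tooling expects.
    func linkCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// Equivalent of: ln -fs <certPath> <link>
    	return exec.Command("ln", "-fs", certPath, link).Run()
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println("error:", err)
    	}
    }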
	I1210 06:24:39.386867  331193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:24:39.392763  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:24:39.452683  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:24:39.512535  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:24:39.562115  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:24:39.622216  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:24:39.684511  331193 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:24:39.746211  331193 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-643991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-643991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:39.746308  331193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:24:39.746381  331193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:24:39.788828  331193 cri.go:89] found id: "8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577"
	I1210 06:24:39.788851  331193 cri.go:89] found id: "939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231"
	I1210 06:24:39.788856  331193 cri.go:89] found id: "9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9"
	I1210 06:24:39.788861  331193 cri.go:89] found id: "e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13"
	I1210 06:24:39.788866  331193 cri.go:89] found id: ""
	I1210 06:24:39.788911  331193 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:24:39.805869  331193 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:24:39Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:24:39.805936  331193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:24:39.818381  331193 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:24:39.818402  331193 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:24:39.818587  331193 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:24:39.829879  331193 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:24:39.831202  331193 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-643991" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:39.832361  331193 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8832/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-643991" cluster setting kubeconfig missing "default-k8s-diff-port-643991" context setting]
	I1210 06:24:39.833794  331193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:39.836442  331193 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:24:39.847872  331193 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1210 06:24:39.848026  331193 kubeadm.go:602] duration metric: took 29.616854ms to restartPrimaryControlPlane
	I1210 06:24:39.848051  331193 kubeadm.go:403] duration metric: took 101.849914ms to StartCluster
	I1210 06:24:39.848095  331193 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:39.848178  331193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:39.850942  331193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:39.851362  331193 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:39.851496  331193 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:24:39.851611  331193 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-643991"
	I1210 06:24:39.851628  331193 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-643991"
	I1210 06:24:39.851633  331193 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-643991"
	I1210 06:24:39.851638  331193 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	W1210 06:24:39.851644  331193 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:24:39.851652  331193 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-643991"
	W1210 06:24:39.851662  331193 addons.go:248] addon dashboard should already be in state true
	I1210 06:24:39.851673  331193 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:24:39.851691  331193 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:24:39.851993  331193 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-643991"
	I1210 06:24:39.852013  331193 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-643991"
	I1210 06:24:39.852174  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.852201  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.852289  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.855413  331193 out.go:179] * Verifying Kubernetes components...
	I1210 06:24:39.858890  331193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:24:39.884040  331193 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:24:39.884934  331193 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-643991"
	W1210 06:24:39.884961  331193 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:24:39.884990  331193 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:24:39.885883  331193 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:24:39.887424  331193 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:24:39.888523  331193 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:24:39.889694  331193 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:24:39.889718  331193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:24:39.889780  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:39.890593  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:24:39.890617  331193 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:24:39.890675  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:39.926093  331193 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:24:39.926126  331193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:24:39.926186  331193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:24:39.935798  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:39.941129  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:39.960213  331193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:24:40.048204  331193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:24:40.067806  331193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:24:40.070912  331193 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-643991" to be "Ready" ...
	I1210 06:24:40.078811  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:24:40.078845  331193 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:24:40.106062  331193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:24:40.109845  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:24:40.109871  331193 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:24:40.155735  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:24:40.155775  331193 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:24:40.192253  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:24:40.192381  331193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:24:40.214622  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:24:40.214644  331193 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:24:40.237925  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:24:40.237952  331193 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:24:40.264863  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:24:40.264929  331193 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:24:40.285257  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:24:40.285289  331193 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:24:40.307998  331193 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:24:40.308032  331193 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:24:40.328972  331193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:24:41.274920  331193 node_ready.go:49] node "default-k8s-diff-port-643991" is "Ready"
	I1210 06:24:41.274963  331193 node_ready.go:38] duration metric: took 1.204019067s for node "default-k8s-diff-port-643991" to be "Ready" ...
	I1210 06:24:41.274982  331193 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:24:41.275043  331193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:24:41.839753  331193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.771911029s)
	I1210 06:24:41.839816  331193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.733712187s)
	I1210 06:24:41.839916  331193 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.51090615s)
	I1210 06:24:41.839980  331193 api_server.go:72] duration metric: took 1.988583505s to wait for apiserver process to appear ...
	I1210 06:24:41.840190  331193 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:24:41.840212  331193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:24:41.842119  331193 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-643991 addons enable metrics-server
	
	I1210 06:24:41.845927  331193 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:41.845957  331193 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:41.850363  331193 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1210 06:24:37.982141  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:40.446587  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:42.447261  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:40.087071  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:42.580290  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:41.851971  331193 addons.go:530] duration metric: took 2.000507294s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:24:42.340651  331193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:24:42.345283  331193 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:24:42.345337  331193 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:24:42.841003  331193 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1210 06:24:42.845358  331193 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1210 06:24:42.846546  331193 api_server.go:141] control plane version: v1.34.2
	I1210 06:24:42.846575  331193 api_server.go:131] duration metric: took 1.006376692s to wait for apiserver health ...
	I1210 06:24:42.846585  331193 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:24:42.850254  331193 system_pods.go:59] 8 kube-system pods found
	I1210 06:24:42.850305  331193 system_pods.go:61] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:42.850315  331193 system_pods.go:61] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:42.850324  331193 system_pods.go:61] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:42.850330  331193 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:42.850337  331193 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:42.850343  331193 system_pods.go:61] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:42.850355  331193 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:42.850367  331193 system_pods.go:61] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:42.850375  331193 system_pods.go:74] duration metric: took 3.783492ms to wait for pod list to return data ...
	I1210 06:24:42.850387  331193 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:24:42.852891  331193 default_sa.go:45] found service account: "default"
	I1210 06:24:42.852913  331193 default_sa.go:55] duration metric: took 2.520509ms for default service account to be created ...
	I1210 06:24:42.852921  331193 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 06:24:42.855638  331193 system_pods.go:86] 8 kube-system pods found
	I1210 06:24:42.855665  331193 system_pods.go:89] "coredns-66bc5c9577-znsz6" [e151b597-32ae-4033-8ce6-fc3d9efd72b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 06:24:42.855674  331193 system_pods.go:89] "etcd-default-k8s-diff-port-643991" [d45a67d5-7ee5-4f45-bef2-491ce1204cde] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:24:42.855682  331193 system_pods.go:89] "kindnet-7j6ns" [a757a831-3437-4844-a84f-3eb2b8d6dad5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:24:42.855688  331193 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-643991" [3f4ebf3d-40e0-4a3b-bff1-90f5f486cab9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:24:42.855695  331193 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-643991" [6955b6b4-7da0-4c20-8ab9-899868eca432] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:24:42.855700  331193 system_pods.go:89] "kube-proxy-mkpzc" [f4ed478e-05fc-4161-ae59-666311f1a620] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:24:42.855706  331193 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-643991" [29f8dbc9-8a3b-45f2-b54f-df593f38ab0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:24:42.855710  331193 system_pods.go:89] "storage-provisioner" [dc38e64c-cf9f-42d4-a886-014f884f425d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 06:24:42.855718  331193 system_pods.go:126] duration metric: took 2.791834ms to wait for k8s-apps to be running ...
	I1210 06:24:42.855727  331193 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 06:24:42.855774  331193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:24:42.869368  331193 system_svc.go:56] duration metric: took 13.631016ms WaitForService to wait for kubelet
	I1210 06:24:42.869399  331193 kubeadm.go:587] duration metric: took 3.018002854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:24:42.869427  331193 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:24:42.872414  331193 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:24:42.872446  331193 node_conditions.go:123] node cpu capacity is 8
	I1210 06:24:42.872463  331193 node_conditions.go:105] duration metric: took 3.02997ms to run NodePressure ...
	I1210 06:24:42.872490  331193 start.go:242] waiting for startup goroutines ...
	I1210 06:24:42.872498  331193 start.go:247] waiting for cluster config update ...
	I1210 06:24:42.872511  331193 start.go:256] writing updated cluster config ...
	I1210 06:24:42.872859  331193 ssh_runner.go:195] Run: rm -f paused
	I1210 06:24:42.877443  331193 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:24:42.881288  331193 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 06:24:44.947238  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:47.447498  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:45.079791  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:47.080378  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:49.080980  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:44.888282  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:47.387268  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:49.388359  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:49.448076  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:51.947304  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	W1210 06:24:51.579788  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:24:53.580057  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
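	A minimal sketch of the wait loop that the api_server.go lines above describe: poll the apiserver /healthz endpoint until it returns 200, treating the 500 responses with failed post-start hooks as "not ready yet". This is not minikube's actual code; the URL, interval, and timeout below are illustrative assumptions taken loosely from this run.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test cluster serves a self-signed certificate, so this sketch
			// skips certificate verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				// A 500 listing "[-]poststarthook/... failed" means not ready yet; retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}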
	
	
	==> CRI-O <==
	Dec 10 06:24:23 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:23.182755613Z" level=info msg="Created container 8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s/kubernetes-dashboard" id=0fcf3f91-38c3-4fba-b3ea-0ec6a545da9c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:23 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:23.183509967Z" level=info msg="Starting container: 8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc" id=434080c6-c706-46bf-a91d-4f9d0f04a6ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:23 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:23.18552375Z" level=info msg="Started container" PID=1747 containerID=8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s/kubernetes-dashboard id=434080c6-c706-46bf-a91d-4f9d0f04a6ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=f5117b4231bcec78d4f64fead77dc454694eab39f9326a2c5a3d8d93aff92fe1
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.616213543Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d790219a-8569-47ce-807e-55cffe530ca4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.619979235Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=88228445-7216-4b00-8ad4-43c1f16540e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.621571275Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00411ec9-5328-46e2-9183-976d59896521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.621721804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628251113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628550349Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5c0f1429f8c9a8e68469e545ceb533d1165d796ac2ba33ee877f486d521b18f5/merged/etc/passwd: no such file or directory"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628589304Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5c0f1429f8c9a8e68469e545ceb533d1165d796ac2ba33ee877f486d521b18f5/merged/etc/group: no such file or directory"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.628906181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.678099802Z" level=info msg="Created container 914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a: kube-system/storage-provisioner/storage-provisioner" id=00411ec9-5328-46e2-9183-976d59896521 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.67886329Z" level=info msg="Starting container: 914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a" id=c44b0fc3-26fa-4dee-8c98-e33f7c873cdc name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:35 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:35.681644208Z" level=info msg="Started container" PID=1771 containerID=914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a description=kube-system/storage-provisioner/storage-provisioner id=c44b0fc3-26fa-4dee-8c98-e33f7c873cdc name=/runtime.v1.RuntimeService/StartContainer sandboxID=9154e018aebbee1a98b28c2d4ff34f10668dd884489e2ccc2f268e49c3c69387
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.492829357Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b6e92ee6-692f-44ac-beaa-57ced53db679 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.494376182Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eed97e98-96b4-4c51-917a-f8853c88465d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.497277771Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper" id=767a8ed9-de6c-4884-b7f9-52b6cecac8fc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.497877907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.504695254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.505374328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.54050123Z" level=info msg="Created container 2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper" id=767a8ed9-de6c-4884-b7f9-52b6cecac8fc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.541235164Z" level=info msg="Starting container: 2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f" id=f1eed39d-00c3-416b-a124-eb81c0d34372 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.543641777Z" level=info msg="Started container" PID=1804 containerID=2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper id=f1eed39d-00c3-416b-a124-eb81c0d34372 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d3a58180bdddaa696fb089e4fe37cd68a58d1a7691e1842d9834113b65e8f6e
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.631086935Z" level=info msg="Removing container: 7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac" id=2dcaa6b1-85d8-42b8-8e7a-95ecf3d14fef name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:38 old-k8s-version-424086 crio[568]: time="2025-12-10T06:24:38.646860493Z" level=info msg="Removed container 7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf/dashboard-metrics-scraper" id=2dcaa6b1-85d8-42b8-8e7a-95ecf3d14fef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2391ccb16a41b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   7d3a58180bddd       dashboard-metrics-scraper-5f989dc9cf-z4ftf       kubernetes-dashboard
	914c4088df00c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   9154e018aebbe       storage-provisioner                              kube-system
	8b02c6ca7d446       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   f5117b4231bce       kubernetes-dashboard-8694d4445c-gwx7s            kubernetes-dashboard
	35bfd69509044       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   8c4f119a7ee59       busybox                                          default
	a64b25c87547a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   f55d620ff6ebf       coredns-5dd5756b68-gmssk                         kube-system
	1a8811723167f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   9154e018aebbe       storage-provisioner                              kube-system
	b21b5007f34e2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   2b0615e879c66       kube-proxy-v9pgf                                 kube-system
	d5ff7c07b23bb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   e39750f1f0152       kindnet-2qg8n                                    kube-system
	93a32a0fa3cab       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   a7c3f63289700       kube-apiserver-old-k8s-version-424086            kube-system
	99a520617b270       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   eb99bbb6cc2cf       kube-scheduler-old-k8s-version-424086            kube-system
	70b526ae1f4ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   921d5c4220b08       etcd-old-k8s-version-424086                      kube-system
	eca25d4da6553       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   e1b5ec6584f8c       kube-controller-manager-old-k8s-version-424086   kube-system
	
	
	==> coredns [a64b25c87547a694a7859016b2ba1fcc83c7b299676d2b8c2fcf983aafc02a6a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47268 - 45549 "HINFO IN 8003849988303824958.2538073435513461362. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054953365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
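	The pod_ready.go warnings earlier in the log come from repeatedly checking whether pods such as this coredns instance report a Ready condition. A minimal client-go sketch of that check is below; the kubeconfig path and pod name are placeholders for illustration, not necessarily the values used by this test.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-znsz6", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
	}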
	
	
	==> describe nodes <==
	Name:               old-k8s-version-424086
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-424086
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=old-k8s-version-424086
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_22_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:22:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-424086
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:22:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:24:34 +0000   Wed, 10 Dec 2025 06:23:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-424086
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                e81e4360-349b-45ab-b112-f9ed8c9c5eab
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-gmssk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-424086                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-2qg8n                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-424086             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-424086    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-v9pgf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-424086             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-z4ftf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gwx7s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-424086 event: Registered Node old-k8s-version-424086 in Controller
	  Normal  NodeReady                90s                  kubelet          Node old-k8s-version-424086 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node old-k8s-version-424086 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node old-k8s-version-424086 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-424086 event: Registered Node old-k8s-version-424086 in Controller
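	The node_conditions.go lines earlier (cpu capacity, ephemeral storage, NodePressure) report the same data shown in the Capacity and Conditions blocks above. A minimal client-go sketch of reading those fields, with placeholder kubeconfig path and node name, assuming a reachable cluster:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-424086", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Mirrors the "Capacity:" block of the describe output.
		fmt.Printf("cpu=%s ephemeral-storage=%s\n",
			node.Status.Capacity.Cpu().String(),
			node.Status.Capacity.StorageEphemeral().String())
		// Mirrors the MemoryPressure / DiskPressure / PIDPressure / Ready rows.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %s  %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}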
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [70b526ae1f4ce1d3bdeff2ca86e39c33688d70edf03a257a1b0eeda29e7059a9] <==
	{"level":"info","ts":"2025-12-10T06:24:02.10327Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-10T06:24:02.109079Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-10T06:24:02.109345Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-10T06:24:02.109392Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-10T06:24:02.109509Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T06:24:02.109526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-10T06:24:03.095231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-10T06:24:03.095277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-10T06:24:03.095323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-10T06:24:03.095337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.095342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.095351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.095359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-10T06:24:03.098176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:24:03.098197Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-424086 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-10T06:24:03.098202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-10T06:24:03.098509Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-10T06:24:03.098533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-10T06:24:03.099568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-10T06:24:03.099569Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-10T06:24:34.798057Z","caller":"traceutil/trace.go:171","msg":"trace[446660794] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"107.846212ms","start":"2025-12-10T06:24:34.690187Z","end":"2025-12-10T06:24:34.798033Z","steps":["trace[446660794] 'read index received'  (duration: 107.654594ms)","trace[446660794] 'applied index is now lower than readState.Index'  (duration: 190.919µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-10T06:24:34.798182Z","caller":"traceutil/trace.go:171","msg":"trace[1125219541] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"159.709928ms","start":"2025-12-10T06:24:34.638435Z","end":"2025-12-10T06:24:34.798145Z","steps":["trace[1125219541] 'process raft request'  (duration: 159.437493ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-10T06:24:34.798216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.02688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-gmssk\" ","response":"range_response_count:1 size:4991"}
	{"level":"info","ts":"2025-12-10T06:24:34.79827Z","caller":"traceutil/trace.go:171","msg":"trace[1538081865] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-gmssk; range_end:; response_count:1; response_revision:633; }","duration":"108.103441ms","start":"2025-12-10T06:24:34.690157Z","end":"2025-12-10T06:24:34.79826Z","steps":["trace[1538081865] 'agreement among raft nodes before linearized reading'  (duration: 107.969386ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-10T06:24:34.977325Z","caller":"traceutil/trace.go:171","msg":"trace[440854227] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"153.68203ms","start":"2025-12-10T06:24:34.823614Z","end":"2025-12-10T06:24:34.977296Z","steps":["trace[440854227] 'process raft request'  (duration: 84.563996ms)","trace[440854227] 'compare'  (duration: 68.946995ms)"],"step_count":2}
	
	
	==> kernel <==
	 06:24:55 up  1:07,  0 user,  load average: 5.28, 4.93, 3.03
	Linux old-k8s-version-424086 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5ff7c07b23bb6e013e976a59d08c0963394c1d3c83054f617318b04962837f7] <==
	I1210 06:24:05.023685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:05.023950       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:24:05.024120       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:05.024144       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:05.024178       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:05.322836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:05.322948       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:05.322974       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:05.420078       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:05.723093       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:05.723125       1 metrics.go:72] Registering metrics
	I1210 06:24:05.723177       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:15.323347       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:15.323417       1 main.go:301] handling current node
	I1210 06:24:25.323590       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:25.323663       1 main.go:301] handling current node
	I1210 06:24:35.323245       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:35.323313       1 main.go:301] handling current node
	I1210 06:24:45.328539       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1210 06:24:45.328586       1 main.go:301] handling current node
	
	
	==> kube-apiserver [93a32a0fa3cab7bf6ae2839ea587c0222d752c39fb0442b5594fc8fb840385c5] <==
	I1210 06:24:04.135392       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1210 06:24:04.135436       1 aggregator.go:166] initial CRD sync complete...
	I1210 06:24:04.135443       1 autoregister_controller.go:141] Starting autoregister controller
	I1210 06:24:04.135450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:24:04.135456       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:24:04.135508       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1210 06:24:04.135536       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 06:24:04.135653       1 shared_informer.go:318] Caches are synced for configmaps
	I1210 06:24:04.136168       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 06:24:04.136186       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1210 06:24:04.136200       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1210 06:24:04.143529       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:24:04.182434       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:04.994689       1 controller.go:624] quota admission added evaluator for: namespaces
	I1210 06:24:05.034920       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1210 06:24:05.040847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:24:05.057662       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:05.067613       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:05.078434       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1210 06:24:05.121601       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.48.223"}
	I1210 06:24:05.136609       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.149.231"}
	I1210 06:24:16.463205       1 controller.go:624] quota admission added evaluator for: endpoints
	I1210 06:24:16.616098       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:16.664830       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1210 06:24:16.664832       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [eca25d4da655329c0f900bc2d9a38df2f8b3abd27a1fb23973129f968c2ffbea] <==
	I1210 06:24:16.535436       1 shared_informer.go:318] Caches are synced for resource quota
	I1210 06:24:16.668577       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1210 06:24:16.669042       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1210 06:24:16.678288       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-z4ftf"
	I1210 06:24:16.678317       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-gwx7s"
	I1210 06:24:16.687502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.323678ms"
	I1210 06:24:16.688427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.814699ms"
	I1210 06:24:16.698719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.092791ms"
	I1210 06:24:16.698952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.474613ms"
	I1210 06:24:16.699212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="178.556µs"
	I1210 06:24:16.711910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.094141ms"
	I1210 06:24:16.712157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.266µs"
	I1210 06:24:16.719501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="95.035µs"
	I1210 06:24:16.854766       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:24:16.906985       1 shared_informer.go:318] Caches are synced for garbage collector
	I1210 06:24:16.907019       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1210 06:24:19.572818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.78µs"
	I1210 06:24:20.577285       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.339µs"
	I1210 06:24:21.579250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.719µs"
	I1210 06:24:23.593708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.187336ms"
	I1210 06:24:23.593830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.605µs"
	I1210 06:24:35.970136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.821046ms"
	I1210 06:24:35.970346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.542µs"
	I1210 06:24:38.648531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="105.06µs"
	I1210 06:24:47.005831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.965µs"
	
	
	==> kube-proxy [b21b5007f34e2df91ee40c8acf976b58a08736cb563430c576aebb7a80a57bd7] <==
	I1210 06:24:04.908319       1 server_others.go:69] "Using iptables proxy"
	I1210 06:24:04.919148       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1210 06:24:04.939909       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:04.942449       1 server_others.go:152] "Using iptables Proxier"
	I1210 06:24:04.942504       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1210 06:24:04.942515       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1210 06:24:04.942553       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1210 06:24:04.942800       1 server.go:846] "Version info" version="v1.28.0"
	I1210 06:24:04.942813       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:04.943411       1 config.go:188] "Starting service config controller"
	I1210 06:24:04.943441       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1210 06:24:04.943463       1 config.go:315] "Starting node config controller"
	I1210 06:24:04.943479       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1210 06:24:04.943561       1 config.go:97] "Starting endpoint slice config controller"
	I1210 06:24:04.943587       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1210 06:24:05.043668       1 shared_informer.go:318] Caches are synced for node config
	I1210 06:24:05.043679       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1210 06:24:05.043726       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [99a520617b27091388284c36bef3465458e40aa0ab841df386ee409f39ccbee2] <==
	I1210 06:24:02.491764       1 serving.go:348] Generated self-signed cert in-memory
	W1210 06:24:04.087740       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:24:04.087902       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:24:04.087969       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:24:04.088001       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:24:04.106402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1210 06:24:04.106436       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:04.108121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:04.108159       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 06:24:04.109076       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 06:24:04.109329       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 06:24:04.209249       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765521     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbz78\" (UniqueName: \"kubernetes.io/projected/8f128d15-7745-4c29-bb40-04e58c18e98c-kube-api-access-rbz78\") pod \"dashboard-metrics-scraper-5f989dc9cf-z4ftf\" (UID: \"8f128d15-7745-4c29-bb40-04e58c18e98c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf"
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765600     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f128d15-7745-4c29-bb40-04e58c18e98c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-z4ftf\" (UID: \"8f128d15-7745-4c29-bb40-04e58c18e98c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf"
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765627     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m994j\" (UniqueName: \"kubernetes.io/projected/3e9c8ba6-46d4-4305-9a87-ffc54ec95c34-kube-api-access-m994j\") pod \"kubernetes-dashboard-8694d4445c-gwx7s\" (UID: \"3e9c8ba6-46d4-4305-9a87-ffc54ec95c34\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s"
	Dec 10 06:24:16 old-k8s-version-424086 kubelet[735]: I1210 06:24:16.765650     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3e9c8ba6-46d4-4305-9a87-ffc54ec95c34-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gwx7s\" (UID: \"3e9c8ba6-46d4-4305-9a87-ffc54ec95c34\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s"
	Dec 10 06:24:19 old-k8s-version-424086 kubelet[735]: I1210 06:24:19.560197     735 scope.go:117] "RemoveContainer" containerID="9dc3ed6a9254e3a9c83fc19a222564294ae546fcf8722d731a8a1b16ab52311a"
	Dec 10 06:24:20 old-k8s-version-424086 kubelet[735]: I1210 06:24:20.564716     735 scope.go:117] "RemoveContainer" containerID="9dc3ed6a9254e3a9c83fc19a222564294ae546fcf8722d731a8a1b16ab52311a"
	Dec 10 06:24:20 old-k8s-version-424086 kubelet[735]: I1210 06:24:20.564994     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:20 old-k8s-version-424086 kubelet[735]: E1210 06:24:20.565359     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:21 old-k8s-version-424086 kubelet[735]: I1210 06:24:21.568491     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:21 old-k8s-version-424086 kubelet[735]: E1210 06:24:21.568878     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:23 old-k8s-version-424086 kubelet[735]: I1210 06:24:23.586792     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gwx7s" podStartSLOduration=1.465063555 podCreationTimestamp="2025-12-10 06:24:16 +0000 UTC" firstStartedPulling="2025-12-10 06:24:17.017570258 +0000 UTC m=+15.618253228" lastFinishedPulling="2025-12-10 06:24:23.139223685 +0000 UTC m=+21.739906673" observedRunningTime="2025-12-10 06:24:23.586299045 +0000 UTC m=+22.186982030" watchObservedRunningTime="2025-12-10 06:24:23.586717 +0000 UTC m=+22.187399984"
	Dec 10 06:24:26 old-k8s-version-424086 kubelet[735]: I1210 06:24:26.992282     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:26 old-k8s-version-424086 kubelet[735]: E1210 06:24:26.992773     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:35 old-k8s-version-424086 kubelet[735]: I1210 06:24:35.615410     735 scope.go:117] "RemoveContainer" containerID="1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: I1210 06:24:38.492016     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: I1210 06:24:38.629814     735 scope.go:117] "RemoveContainer" containerID="7e608e8cdebfc1bbc8881003e7f9c27666867615796eb6fb4156af64286395ac"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: I1210 06:24:38.630067     735 scope.go:117] "RemoveContainer" containerID="2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	Dec 10 06:24:38 old-k8s-version-424086 kubelet[735]: E1210 06:24:38.630454     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:46 old-k8s-version-424086 kubelet[735]: I1210 06:24:46.992919     735 scope.go:117] "RemoveContainer" containerID="2391ccb16a41baf6874b7001b4ce1302fe76bd9c37f0aa3d9209904f2376550f"
	Dec 10 06:24:46 old-k8s-version-424086 kubelet[735]: E1210 06:24:46.993362     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-z4ftf_kubernetes-dashboard(8f128d15-7745-4c29-bb40-04e58c18e98c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-z4ftf" podUID="8f128d15-7745-4c29-bb40-04e58c18e98c"
	Dec 10 06:24:50 old-k8s-version-424086 kubelet[735]: I1210 06:24:50.165083     735 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:24:50 old-k8s-version-424086 systemd[1]: kubelet.service: Consumed 1.561s CPU time.
	
	
	==> kubernetes-dashboard [8b02c6ca7d4466db7f6c782b5cef77cc7d1b41833fc02837b2fbfa4014dcd4dc] <==
	2025/12/10 06:24:23 Starting overwatch
	2025/12/10 06:24:23 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:23 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:23 Using secret token for csrf signing
	2025/12/10 06:24:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:23 Successful initial request to the apiserver, version: v1.28.0
	2025/12/10 06:24:23 Generating JWE encryption key
	2025/12/10 06:24:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:23 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:23 Creating in-cluster Sidecar client
	2025/12/10 06:24:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:23 Serving insecurely on HTTP port: 9090
	2025/12/10 06:24:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1a8811723167fa6947da5975aed1032d246a1439e70ddd047ab355bb354799c3] <==
	I1210 06:24:04.872822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:24:34.875636       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [914c4088df00c31f369bdfe0e192e6636063078e58e9ec66a664954130a9142a] <==
	I1210 06:24:35.704910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:24:35.722237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:24:35.722385       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 06:24:53.125322       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:24:53.125385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4180df8f-51ab-47df-91f5-dd51db49c438", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-424086_ba7bab8b-2a55-4bc7-8ced-f67781aaf0f4 became leader
	I1210 06:24:53.125488       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-424086_ba7bab8b-2a55-4bc7-8ced-f67781aaf0f4!
	I1210 06:24:53.226554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-424086_ba7bab8b-2a55-4bc7-8ced-f67781aaf0f4!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-424086 -n old-k8s-version-424086
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-424086 -n old-k8s-version-424086: exit status 2 (351.746931ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-424086 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-713838 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-713838 --alsologtostderr -v=1: exit status 80 (2.281210868s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-713838 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:25:13.682511  339419 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:13.682624  339419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:13.682633  339419 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:13.682638  339419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:13.682851  339419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:13.683069  339419 out.go:368] Setting JSON to false
	I1210 06:25:13.683085  339419 mustload.go:66] Loading cluster: no-preload-713838
	I1210 06:25:13.683425  339419 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:13.683830  339419 cli_runner.go:164] Run: docker container inspect no-preload-713838 --format={{.State.Status}}
	I1210 06:25:13.703155  339419 host.go:66] Checking if "no-preload-713838" exists ...
	I1210 06:25:13.703451  339419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:13.766071  339419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-10 06:25:13.754129596 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:13.767089  339419 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-713838 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:25:13.769208  339419 out.go:179] * Pausing node no-preload-713838 ... 
	I1210 06:25:13.770646  339419 host.go:66] Checking if "no-preload-713838" exists ...
	I1210 06:25:13.770982  339419 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:13.771037  339419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-713838
	I1210 06:25:13.799186  339419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/no-preload-713838/id_rsa Username:docker}
	I1210 06:25:13.895702  339419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:13.908837  339419 pause.go:52] kubelet running: true
	I1210 06:25:13.908914  339419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:14.091533  339419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:14.091636  339419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:14.162559  339419 cri.go:89] found id: "a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a"
	I1210 06:25:14.162591  339419 cri.go:89] found id: "b8b7bbd3a73fd38688e69778eb82aaaf4c797868eabe1e3f152394098b131417"
	I1210 06:25:14.162598  339419 cri.go:89] found id: "c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945"
	I1210 06:25:14.162604  339419 cri.go:89] found id: "8b0ce37d641b86571cb5fe3e7bea6acb3968e201d71d2f8e58691e954136608d"
	I1210 06:25:14.162609  339419 cri.go:89] found id: "1674e78c0eb170c33491a539a64875481799478be07a401ad67fa61986708cd8"
	I1210 06:25:14.162614  339419 cri.go:89] found id: "7a81e637bcbb822a79d3c9d17ceb44a480481cefb6fd1bd5c4f5c51620d65578"
	I1210 06:25:14.162619  339419 cri.go:89] found id: "626d40c34f5a9cc949c8b0c13c01036e5fa575714b2210e614b23214089d41e2"
	I1210 06:25:14.162624  339419 cri.go:89] found id: "7e3a6ab1e6a60502ba96ff6d0a9a8e22ac37e396772330314f4ad7f55de8b26b"
	I1210 06:25:14.162628  339419 cri.go:89] found id: "352c8f0e348fa006abc84878109c4605c54ea03f96f88b48143b6b659f4b95cb"
	I1210 06:25:14.162641  339419 cri.go:89] found id: "8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	I1210 06:25:14.162648  339419 cri.go:89] found id: "1323310fa79d97e897f48248a3271e2d891ea27849df186d92211b9ee5b46f18"
	I1210 06:25:14.162650  339419 cri.go:89] found id: ""
	I1210 06:25:14.162692  339419 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:14.176179  339419 retry.go:31] will retry after 201.237806ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:14Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:14.377616  339419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:14.394861  339419 pause.go:52] kubelet running: false
	I1210 06:25:14.394914  339419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:14.546000  339419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:14.546098  339419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:14.626893  339419 cri.go:89] found id: "a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a"
	I1210 06:25:14.626916  339419 cri.go:89] found id: "b8b7bbd3a73fd38688e69778eb82aaaf4c797868eabe1e3f152394098b131417"
	I1210 06:25:14.626921  339419 cri.go:89] found id: "c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945"
	I1210 06:25:14.626926  339419 cri.go:89] found id: "8b0ce37d641b86571cb5fe3e7bea6acb3968e201d71d2f8e58691e954136608d"
	I1210 06:25:14.626931  339419 cri.go:89] found id: "1674e78c0eb170c33491a539a64875481799478be07a401ad67fa61986708cd8"
	I1210 06:25:14.626937  339419 cri.go:89] found id: "7a81e637bcbb822a79d3c9d17ceb44a480481cefb6fd1bd5c4f5c51620d65578"
	I1210 06:25:14.626941  339419 cri.go:89] found id: "626d40c34f5a9cc949c8b0c13c01036e5fa575714b2210e614b23214089d41e2"
	I1210 06:25:14.626946  339419 cri.go:89] found id: "7e3a6ab1e6a60502ba96ff6d0a9a8e22ac37e396772330314f4ad7f55de8b26b"
	I1210 06:25:14.626951  339419 cri.go:89] found id: "352c8f0e348fa006abc84878109c4605c54ea03f96f88b48143b6b659f4b95cb"
	I1210 06:25:14.626960  339419 cri.go:89] found id: "8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	I1210 06:25:14.626970  339419 cri.go:89] found id: "1323310fa79d97e897f48248a3271e2d891ea27849df186d92211b9ee5b46f18"
	I1210 06:25:14.626974  339419 cri.go:89] found id: ""
	I1210 06:25:14.627027  339419 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:14.639953  339419 retry.go:31] will retry after 366.279946ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:14Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:15.006535  339419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:15.020948  339419 pause.go:52] kubelet running: false
	I1210 06:25:15.021001  339419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:15.197850  339419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:15.197940  339419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:15.282858  339419 cri.go:89] found id: "a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a"
	I1210 06:25:15.282883  339419 cri.go:89] found id: "b8b7bbd3a73fd38688e69778eb82aaaf4c797868eabe1e3f152394098b131417"
	I1210 06:25:15.282888  339419 cri.go:89] found id: "c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945"
	I1210 06:25:15.282892  339419 cri.go:89] found id: "8b0ce37d641b86571cb5fe3e7bea6acb3968e201d71d2f8e58691e954136608d"
	I1210 06:25:15.282895  339419 cri.go:89] found id: "1674e78c0eb170c33491a539a64875481799478be07a401ad67fa61986708cd8"
	I1210 06:25:15.282898  339419 cri.go:89] found id: "7a81e637bcbb822a79d3c9d17ceb44a480481cefb6fd1bd5c4f5c51620d65578"
	I1210 06:25:15.282901  339419 cri.go:89] found id: "626d40c34f5a9cc949c8b0c13c01036e5fa575714b2210e614b23214089d41e2"
	I1210 06:25:15.282905  339419 cri.go:89] found id: "7e3a6ab1e6a60502ba96ff6d0a9a8e22ac37e396772330314f4ad7f55de8b26b"
	I1210 06:25:15.282908  339419 cri.go:89] found id: "352c8f0e348fa006abc84878109c4605c54ea03f96f88b48143b6b659f4b95cb"
	I1210 06:25:15.282931  339419 cri.go:89] found id: "8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	I1210 06:25:15.282936  339419 cri.go:89] found id: "1323310fa79d97e897f48248a3271e2d891ea27849df186d92211b9ee5b46f18"
	I1210 06:25:15.282948  339419 cri.go:89] found id: ""
	I1210 06:25:15.282995  339419 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:15.297213  339419 retry.go:31] will retry after 320.082936ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:15Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:15.617607  339419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:15.637196  339419 pause.go:52] kubelet running: false
	I1210 06:25:15.637258  339419 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:15.793985  339419 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:15.794060  339419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:15.876737  339419 cri.go:89] found id: "a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a"
	I1210 06:25:15.876760  339419 cri.go:89] found id: "b8b7bbd3a73fd38688e69778eb82aaaf4c797868eabe1e3f152394098b131417"
	I1210 06:25:15.876767  339419 cri.go:89] found id: "c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945"
	I1210 06:25:15.876772  339419 cri.go:89] found id: "8b0ce37d641b86571cb5fe3e7bea6acb3968e201d71d2f8e58691e954136608d"
	I1210 06:25:15.876777  339419 cri.go:89] found id: "1674e78c0eb170c33491a539a64875481799478be07a401ad67fa61986708cd8"
	I1210 06:25:15.876782  339419 cri.go:89] found id: "7a81e637bcbb822a79d3c9d17ceb44a480481cefb6fd1bd5c4f5c51620d65578"
	I1210 06:25:15.876787  339419 cri.go:89] found id: "626d40c34f5a9cc949c8b0c13c01036e5fa575714b2210e614b23214089d41e2"
	I1210 06:25:15.876791  339419 cri.go:89] found id: "7e3a6ab1e6a60502ba96ff6d0a9a8e22ac37e396772330314f4ad7f55de8b26b"
	I1210 06:25:15.876795  339419 cri.go:89] found id: "352c8f0e348fa006abc84878109c4605c54ea03f96f88b48143b6b659f4b95cb"
	I1210 06:25:15.876803  339419 cri.go:89] found id: "8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	I1210 06:25:15.876816  339419 cri.go:89] found id: "1323310fa79d97e897f48248a3271e2d891ea27849df186d92211b9ee5b46f18"
	I1210 06:25:15.876821  339419 cri.go:89] found id: ""
	I1210 06:25:15.876858  339419 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:15.892888  339419 out.go:203] 
	W1210 06:25:15.894251  339419 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:25:15.894275  339419 out.go:285] * 
	* 
	W1210 06:25:15.898554  339419 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:25:15.900166  339419 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-713838 --alsologtostderr -v=1 failed: exit status 80
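The stderr above shows why the pause exits with GUEST_PAUSE: crictl does find the kube-system and kubernetes-dashboard containers, but every call to `sudo runc list -f json` fails with "open /run/runc: no such file or directory", and after a few retries ("will retry after ..." from retry.go) minikube gives up with exit status 80. The snippet below is a minimal, hypothetical sketch of that retry loop, written for illustration only; it is not minikube's pause implementation, and the attempt count and backoff values are assumptions chosen to mirror the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning mimics the behaviour visible in the log: run
// `sudo runc list -f json` and retry with a short, growing delay on failure.
// In the failed test every attempt hits "open /run/runc: no such file or
// directory", so the loop exhausts its retries and the caller reports
// GUEST_PAUSE.
func listRunning(attempts int) ([]byte, error) {
	delay := 200 * time.Millisecond // assumed starting delay, roughly matching the log
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2
	}
	return nil, fmt.Errorf("list running: runc: %w", lastErr)
}

func main() {
	if _, err := listRunning(4); err != nil {
		fmt.Println("pause would fail here:", err)
	}
}

The mismatch worth noting is that the CRI side (crictl) sees the containers while runc's state directory /run/runc is missing on the node, so the failure is inside the node rather than in the Docker driver.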
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-713838
helpers_test.go:244: (dbg) docker inspect no-preload-713838:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2",
	        "Created": "2025-12-10T06:22:56.695408224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:24:17.801249327Z",
	            "FinishedAt": "2025-12-10T06:24:16.706896856Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/hosts",
	        "LogPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2-json.log",
	        "Name": "/no-preload-713838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-713838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-713838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2",
	                "LowerDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-713838",
	                "Source": "/var/lib/docker/volumes/no-preload-713838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-713838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-713838",
	                "name.minikube.sigs.k8s.io": "no-preload-713838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "795e8ad253d7bee2f68d1d6fb5f76e044f53900c178ca76f6287af15c798f873",
	            "SandboxKey": "/var/run/docker/netns/795e8ad253d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-713838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8987097bf8a19a968989f80c7ad4a35d96813c7e6580ac101cba37c806b19e54",
	                    "EndpointID": "bdcc53a3bf749ba7f4501d45df356f14f95abd5410a67179fa7ca326ce698e81",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9e:3b:de:f9:af:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-713838",
	                        "4a9af4b439c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
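The inspect output above shows the node container itself is still healthy after the failed pause: .State.Running is true and .State.Paused is false, so the problem appears to be inside the node (the missing /run/runc state directory seen earlier) rather than at the Docker level. For a quick spot-check of just those fields, the same Go-template syntax the harness already uses works, for example:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-713838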
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838: exit status 2 (410.379654ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-713838 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-713838 logs -n 25: (1.223091414s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                                      │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:24:59.327087  336887 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:59.327365  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327375  336887 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:59.327379  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327669  336887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:59.328143  336887 out.go:368] Setting JSON to false
	I1210 06:24:59.329429  336887 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4050,"bootTime":1765343849,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:59.329519  336887 start.go:143] virtualization: kvm guest
	I1210 06:24:59.331611  336887 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:59.333096  336887 notify.go:221] Checking for updates...
	I1210 06:24:59.333116  336887 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:59.334447  336887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:59.336068  336887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:59.337494  336887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:59.338960  336887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:59.340340  336887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:59.342187  336887 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342330  336887 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342492  336887 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:24:59.342623  336887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:59.369242  336887 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:59.369328  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.432140  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.420604919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.432256  336887 docker.go:319] overlay module found
	I1210 06:24:59.435201  336887 out.go:179] * Using the docker driver based on user configuration
	W1210 06:24:55.887075  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:58.386507  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:24:59.436402  336887 start.go:309] selected driver: docker
	I1210 06:24:59.436415  336887 start.go:927] validating driver "docker" against <nil>
	I1210 06:24:59.436427  336887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:59.436998  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.496347  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.486011226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.496517  336887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:24:59.496554  336887 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:24:59.496758  336887 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:24:59.499173  336887 out.go:179] * Using Docker driver with root privileges
	I1210 06:24:59.500516  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:24:59.500598  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:59.500612  336887 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:24:59.500684  336887 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:59.502093  336887 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:24:59.503450  336887 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:59.504798  336887 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:59.506022  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:24:59.506091  336887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:59.506102  336887 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:59.506114  336887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:59.506191  336887 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:59.506203  336887 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:24:59.506300  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:24:59.506323  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json: {Name:mkdf58f074b298e370024a6ce1eb0198fc1a1932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:59.529599  336887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:59.529619  336887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:59.529645  336887 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:24:59.529672  336887 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:59.529766  336887 start.go:364] duration metric: took 78.432µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:24:59.529787  336887 start.go:93] Provisioning new machine with config: &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:59.529851  336887 start.go:125] createHost starting for "" (driver="docker")
	W1210 06:24:58.946860  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:25:00.446892  326955 pod_ready.go:94] pod "coredns-7d764666f9-hr4gk" is "Ready"
	I1210 06:25:00.446917  326955 pod_ready.go:86] duration metric: took 31.006503405s for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.449783  326955 pod_ready.go:83] waiting for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.454644  326955 pod_ready.go:94] pod "etcd-no-preload-713838" is "Ready"
	I1210 06:25:00.454673  326955 pod_ready.go:86] duration metric: took 4.863318ms for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.457203  326955 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.462197  326955 pod_ready.go:94] pod "kube-apiserver-no-preload-713838" is "Ready"
	I1210 06:25:00.462227  326955 pod_ready.go:86] duration metric: took 4.996726ms for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.464859  326955 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.643687  326955 pod_ready.go:94] pod "kube-controller-manager-no-preload-713838" is "Ready"
	I1210 06:25:00.643711  326955 pod_ready.go:86] duration metric: took 178.834657ms for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.844018  326955 pod_ready.go:83] waiting for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.244075  326955 pod_ready.go:94] pod "kube-proxy-c62hk" is "Ready"
	I1210 06:25:01.244105  326955 pod_ready.go:86] duration metric: took 400.060427ms for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.445041  326955 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843827  326955 pod_ready.go:94] pod "kube-scheduler-no-preload-713838" is "Ready"
	I1210 06:25:01.843854  326955 pod_ready.go:86] duration metric: took 398.788804ms for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843867  326955 pod_ready.go:40] duration metric: took 32.407570406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:01.891782  326955 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:01.897299  326955 out.go:179] * Done! kubectl is now configured to use "no-preload-713838" cluster and "default" namespace by default
	W1210 06:25:00.080872  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:02.579615  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:59.532875  336887 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:24:59.533186  336887 start.go:159] libmachine.API.Create for "newest-cni-126107" (driver="docker")
	I1210 06:24:59.533225  336887 client.go:173] LocalClient.Create starting
	I1210 06:24:59.533327  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 06:24:59.533388  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533416  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533500  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 06:24:59.533540  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533557  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533982  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:24:59.552885  336887 cli_runner.go:211] docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:24:59.552988  336887 network_create.go:284] running [docker network inspect newest-cni-126107] to gather additional debugging logs...
	I1210 06:24:59.553008  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107
	W1210 06:24:59.572451  336887 cli_runner.go:211] docker network inspect newest-cni-126107 returned with exit code 1
	I1210 06:24:59.572534  336887 network_create.go:287] error running [docker network inspect newest-cni-126107]: docker network inspect newest-cni-126107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-126107 not found
	I1210 06:24:59.572551  336887 network_create.go:289] output of [docker network inspect newest-cni-126107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-126107 not found
	
	** /stderr **
	I1210 06:24:59.572710  336887 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:59.592775  336887 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
	I1210 06:24:59.593342  336887 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2fbfa5ca31a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:30:9e:0a:da:73} reservation:<nil>}
	I1210 06:24:59.594133  336887 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-68b4fc4b224b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:0a:d7:21:69:83} reservation:<nil>}
	I1210 06:24:59.594915  336887 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a24a8ad90ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ea:e5:16:4c:6f} reservation:<nil>}
	I1210 06:24:59.595927  336887 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd18e0}
	I1210 06:24:59.595955  336887 network_create.go:124] attempt to create docker network newest-cni-126107 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:24:59.596007  336887 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-126107 newest-cni-126107
	I1210 06:24:59.648242  336887 network_create.go:108] docker network newest-cni-126107 192.168.85.0/24 created
	I1210 06:24:59.648276  336887 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-126107" container
	I1210 06:24:59.648334  336887 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:24:59.667592  336887 cli_runner.go:164] Run: docker volume create newest-cni-126107 --label name.minikube.sigs.k8s.io=newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:24:59.686982  336887 oci.go:103] Successfully created a docker volume newest-cni-126107
	I1210 06:24:59.687084  336887 cli_runner.go:164] Run: docker run --rm --name newest-cni-126107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --entrypoint /usr/bin/test -v newest-cni-126107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:25:00.115171  336887 oci.go:107] Successfully prepared a docker volume newest-cni-126107
	I1210 06:25:00.115245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:00.115259  336887 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:25:00.115360  336887 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:25:04.112675  336887 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.997248616s)
	I1210 06:25:04.112712  336887 kic.go:203] duration metric: took 3.997449096s to extract preloaded images to volume ...
	W1210 06:25:04.112837  336887 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:25:04.112877  336887 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:25:04.112928  336887 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:25:04.172016  336887 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-126107 --name newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-126107 --network newest-cni-126107 --ip 192.168.85.2 --volume newest-cni-126107:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	W1210 06:25:00.387118  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:02.917573  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:04.579873  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:06.580394  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:25:07.580576  327833 pod_ready.go:94] pod "coredns-66bc5c9577-gw75x" is "Ready"
	I1210 06:25:07.580605  327833 pod_ready.go:86] duration metric: took 37.506619554s for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.583509  327833 pod_ready.go:83] waiting for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.587865  327833 pod_ready.go:94] pod "etcd-embed-certs-133470" is "Ready"
	I1210 06:25:07.587890  327833 pod_ready.go:86] duration metric: took 4.359471ms for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.590170  327833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.594746  327833 pod_ready.go:94] pod "kube-apiserver-embed-certs-133470" is "Ready"
	I1210 06:25:07.594774  327833 pod_ready.go:86] duration metric: took 4.57905ms for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.596975  327833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.778320  327833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-133470" is "Ready"
	I1210 06:25:07.778347  327833 pod_ready.go:86] duration metric: took 181.346408ms for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.979006  327833 pod_ready.go:83] waiting for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.378607  327833 pod_ready.go:94] pod "kube-proxy-fkdk9" is "Ready"
	I1210 06:25:08.378631  327833 pod_ready.go:86] duration metric: took 399.601345ms for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.578014  327833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978761  327833 pod_ready.go:94] pod "kube-scheduler-embed-certs-133470" is "Ready"
	I1210 06:25:08.978787  327833 pod_ready.go:86] duration metric: took 400.749384ms for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978798  327833 pod_ready.go:40] duration metric: took 38.909473428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:09.028286  327833 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:09.030218  327833 out.go:179] * Done! kubectl is now configured to use "embed-certs-133470" cluster and "default" namespace by default
	I1210 06:25:04.481386  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Running}}
	I1210 06:25:04.502244  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.522735  336887 cli_runner.go:164] Run: docker exec newest-cni-126107 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:25:04.571010  336887 oci.go:144] the created container "newest-cni-126107" has a running status.
	I1210 06:25:04.571044  336887 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa...
	I1210 06:25:04.663409  336887 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:25:04.690550  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.713575  336887 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:25:04.713604  336887 kic_runner.go:114] Args: [docker exec --privileged newest-cni-126107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:25:04.767064  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.791773  336887 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:04.791873  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:04.819325  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:04.819813  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:04.819834  336887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:04.820667  336887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:25:07.958166  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:07.958195  336887 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:07.958260  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:07.980501  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:07.980710  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:07.980728  336887 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:08.127040  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:08.127128  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.147687  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.147963  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.147982  336887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:08.283513  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:08.283545  336887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:08.283569  336887 ubuntu.go:190] setting up certificates
	I1210 06:25:08.283582  336887 provision.go:84] configureAuth start
	I1210 06:25:08.283641  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:08.304777  336887 provision.go:143] copyHostCerts
	I1210 06:25:08.304859  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:08.304870  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:08.304943  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:08.305028  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:08.305036  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:08.305061  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:08.305130  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:08.305138  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:08.305161  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:08.305231  336887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:08.358046  336887 provision.go:177] copyRemoteCerts
	I1210 06:25:08.358115  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:08.358153  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.378428  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.475365  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:08.497101  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:08.517033  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:08.536354  336887 provision.go:87] duration metric: took 252.752199ms to configureAuth
	I1210 06:25:08.536379  336887 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:08.536554  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:08.536656  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.556388  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.556749  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.556781  336887 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:08.835275  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:08.835301  336887 machine.go:97] duration metric: took 4.043503325s to provisionDockerMachine
	I1210 06:25:08.835313  336887 client.go:176] duration metric: took 9.302078213s to LocalClient.Create
	I1210 06:25:08.835335  336887 start.go:167] duration metric: took 9.302149263s to libmachine.API.Create "newest-cni-126107"
	I1210 06:25:08.835345  336887 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:08.835361  336887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:08.835432  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:08.835497  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.855854  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.956961  336887 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:08.961167  336887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:08.961201  336887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:08.961213  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:08.961271  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:08.961344  336887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:08.961433  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:08.970695  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:08.995442  336887 start.go:296] duration metric: took 160.082878ms for postStartSetup
	I1210 06:25:08.995880  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.016559  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:09.016908  336887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:09.016964  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.038838  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.139907  336887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:09.145902  336887 start.go:128] duration metric: took 9.616033039s to createHost
	I1210 06:25:09.145930  336887 start.go:83] releasing machines lock for "newest-cni-126107", held for 9.616152275s
	I1210 06:25:09.146007  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.166587  336887 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:09.166650  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.166669  336887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:09.166759  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.189521  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.189525  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.284007  336887 ssh_runner.go:195] Run: systemctl --version
	W1210 06:25:05.386403  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:07.387202  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:09.387389  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:09.351948  336887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:09.392017  336887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:09.397100  336887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:09.397159  336887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:09.426437  336887 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:25:09.426486  336887 start.go:496] detecting cgroup driver to use...
	I1210 06:25:09.426524  336887 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:09.426570  336887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:09.444100  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:09.457503  336887 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:09.457569  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:09.475303  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:09.495265  336887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:09.584209  336887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:09.673201  336887 docker.go:234] disabling docker service ...
	I1210 06:25:09.673262  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:09.692964  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:09.706562  336887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:09.794361  336887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:09.886009  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:09.899964  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:09.915638  336887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:09.915690  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.927534  336887 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:09.927591  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.937774  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.947722  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.957780  336887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:09.967038  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.977926  336887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.993658  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:10.003638  336887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:10.012100  336887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:10.021305  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.110274  336887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:25:10.246619  336887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:10.246690  336887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:10.251096  336887 start.go:564] Will wait 60s for crictl version
	I1210 06:25:10.251165  336887 ssh_runner.go:195] Run: which crictl
	I1210 06:25:10.255306  336887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:10.283066  336887 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:10.283157  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.313027  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.346493  336887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:10.348155  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:10.367398  336887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:10.371843  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.385684  336887 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:10.387117  336887 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:10.387245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:10.387300  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.421783  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.421805  336887 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:10.421852  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.448367  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.448389  336887 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:10.448395  336887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:10.448494  336887 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:25:10.448573  336887 ssh_runner.go:195] Run: crio config
	I1210 06:25:10.498037  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:10.498063  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:10.498081  336887 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:10.498120  336887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:10.498246  336887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
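	A minimal sketch of checking the rendered config above by hand, using the paths shown in this log; "kubeadm config validate" is assumed to be available in this kubeadm version:
	# validate the generated config against the kubeadm API types
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# or exercise the init code path without changing the node
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run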
	
	I1210 06:25:10.498306  336887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:10.507229  336887 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:10.507302  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:10.516385  336887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:10.530854  336887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:10.548260  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1210 06:25:10.563281  336887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:10.567436  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.578747  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.660880  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
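A minimal sketch of verifying the daemon-reload/start step above by hand on the node:
	# confirm the kubelet unit is active and inspect its recent log lines
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager -n 50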
	I1210 06:25:10.688248  336887 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:10.688268  336887 certs.go:195] generating shared ca certs ...
	I1210 06:25:10.688286  336887 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.688431  336887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:10.688526  336887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:10.688544  336887 certs.go:257] generating profile certs ...
	I1210 06:25:10.688612  336887 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:10.688636  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt with IP's: []
	I1210 06:25:10.813463  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt ...
	I1210 06:25:10.813530  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt: {Name:mk7009f3bf80c2397e5ae6cdebdca2735a7f7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813756  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key ...
	I1210 06:25:10.813772  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key: {Name:mk6d255207a819b82a749c48b0009054007ff91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813864  336887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:10.813882  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:25:11.022417  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf ...
	I1210 06:25:11.022443  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf: {Name:mk09a2e21f902ac4eed926780c1f90cb426b5a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022619  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf ...
	I1210 06:25:11.022632  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf: {Name:mkc73ed6c35fb6a21244daf518e5b2d0a7440a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022704  336887 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt
	I1210 06:25:11.022778  336887 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key
	I1210 06:25:11.022831  336887 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:11.022848  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt with IP's: []
	I1210 06:25:11.088507  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt ...
	I1210 06:25:11.088534  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt: {Name:mkdd3c9abbfeb78fdbbafdaf53f324a4a2e625ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088686  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key ...
	I1210 06:25:11.088699  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key: {Name:mkd22ad5ae4429236c87cce8641338a9393df47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088869  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:11.088906  336887 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:11.088917  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:11.088939  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:11.088963  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:11.088988  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:11.089034  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:11.089621  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:11.108552  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:11.127416  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:11.146079  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:11.164732  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:11.183864  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:11.202457  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:11.221380  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:11.241165  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:11.262201  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:11.282304  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:11.302104  336887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:11.316208  336887 ssh_runner.go:195] Run: openssl version
	I1210 06:25:11.323011  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.331150  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:11.339353  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343453  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343539  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.378191  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:25:11.387532  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:25:11.395709  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.403915  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:11.413083  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417256  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417315  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.452744  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.460975  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.468848  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.477072  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:11.485572  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490083  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490144  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.529873  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:11.538675  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
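The openssl/ln pairs above implement the usual OpenSSL subject-hash symlink scheme for /etc/ssl/certs; a minimal sketch of one iteration, using the minikubeCA.pem paths from the log:
	# the subject hash becomes the symlink name (b5213941 for minikubeCA.pem above)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"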
	I1210 06:25:11.547942  336887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:11.552437  336887 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:25:11.552529  336887 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:11.552617  336887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:11.552673  336887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:11.582819  336887 cri.go:89] found id: ""
	I1210 06:25:11.582893  336887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:11.591576  336887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:25:11.600085  336887 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:25:11.600143  336887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:25:11.608700  336887 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:25:11.608723  336887 kubeadm.go:158] found existing configuration files:
	
	I1210 06:25:11.608773  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:25:11.617207  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:25:11.617265  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:25:11.625691  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:25:11.634058  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:25:11.634138  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:25:11.642174  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.650696  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:25:11.650751  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.658854  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:25:11.667261  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:25:11.667309  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:25:11.675445  336887 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:25:11.717793  336887 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:25:11.717857  336887 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:25:11.787773  336887 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:25:11.787862  336887 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:25:11.787918  336887 kubeadm.go:319] OS: Linux
	I1210 06:25:11.788013  336887 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:25:11.788088  336887 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:25:11.788209  336887 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:25:11.788287  336887 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:25:11.788329  336887 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:25:11.788400  336887 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:25:11.788501  336887 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:25:11.788573  336887 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:25:11.851680  336887 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:25:11.851818  336887 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:25:11.851989  336887 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:25:11.859860  336887 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:25:11.863122  336887 out.go:252]   - Generating certificates and keys ...
	I1210 06:25:11.863226  336887 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:25:11.863328  336887 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:25:11.994891  336887 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:25:12.216319  336887 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:25:12.263074  336887 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:25:12.317348  336887 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:25:12.348525  336887 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:25:12.348673  336887 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.453542  336887 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:25:12.453734  336887 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.554979  336887 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:25:12.639691  336887 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:25:12.675769  336887 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:25:12.675887  336887 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:25:12.733954  336887 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:25:12.762974  336887 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:25:12.895579  336887 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:25:12.968568  336887 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:25:13.242877  336887 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:25:13.243493  336887 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:25:13.247727  336887 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:25:13.249454  336887 out.go:252]   - Booting up control plane ...
	I1210 06:25:13.249584  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:25:13.249689  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:25:13.249772  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:25:13.266130  336887 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:25:13.266243  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:25:13.273740  336887 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:25:13.274070  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:25:13.274119  336887 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:25:13.387904  336887 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:25:13.388113  336887 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:25:13.888860  336887 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.995328ms
	I1210 06:25:13.892049  336887 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:25:13.892166  336887 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 06:25:13.892313  336887 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:25:13.892419  336887 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 06:25:11.887626  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:14.389916  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
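	The control-plane-check lines above poll fixed health endpoints; a minimal sketch of the same probes from inside the node (addresses taken from the log):
	curl -sk https://192.168.85.2:8443/livez        # kube-apiserver
	curl -s  http://127.0.0.1:10248/healthz         # kubelet
	curl -sk https://127.0.0.1:10257/healthz        # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez          # kube-scheduler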
	
	
	==> CRI-O <==
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.052041074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.052914709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.098987069Z" level=info msg="Created container 8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper" id=cba31ee2-2f19-4ad6-8f9b-39e560948695 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.099801464Z" level=info msg="Starting container: 8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97" id=42b90fa6-0ab3-4837-a72c-c6b50c50d1cd name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.102261609Z" level=info msg="Started container" PID=1748 containerID=8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper id=42b90fa6-0ab3-4837-a72c-c6b50c50d1cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=345910942c6d4641cddc49fd871bbcb7437009051a237bb21a21a4e592bd736a
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.163243895Z" level=info msg="Removing container: 34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c" id=6a605486-f1b6-47a5-b59a-1fdb229a5d76 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.180230637Z" level=info msg="Removed container 34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper" id=6a605486-f1b6-47a5-b59a-1fdb229a5d76 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.187657225Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6e54868-f385-42ca-9935-34079c95d055 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.188724273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=810cc2d5-3ff7-4298-9139-82383ea03f72 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.189790066Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1bd0eeee-7bb9-4613-bdc4-105436abd31b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.189919119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.194943362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.195129843Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/750129e30dbd7296173f32bee2306c7fb3d89d60f6312e401c95b6e52558e5e5/merged/etc/passwd: no such file or directory"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.19516514Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/750129e30dbd7296173f32bee2306c7fb3d89d60f6312e401c95b6e52558e5e5/merged/etc/group: no such file or directory"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.195461868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.234027958Z" level=info msg="Created container a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a: kube-system/storage-provisioner/storage-provisioner" id=1bd0eeee-7bb9-4613-bdc4-105436abd31b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.234869567Z" level=info msg="Starting container: a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a" id=023b35c5-7e46-4276-9597-3af59bb2e1ac name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.236948399Z" level=info msg="Started container" PID=1762 containerID=a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a description=kube-system/storage-provisioner/storage-provisioner id=023b35c5-7e46-4276-9597-3af59bb2e1ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1db7a88a9cb146f8d3e1d996e0f11efeba57fc7e08ce35bb34bfb350475a724
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.035942199Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4a0471d0-7373-4fae-bbba-d4d28dbb39f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.037037757Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3323b6dd-ebd0-4845-9e48-13668b37f81e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.038141408Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper" id=762ee4cc-469e-4d15-96b3-3b3d053705be name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.038281125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.044992817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.045643056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.098319735Z" level=info msg="CreateCtr: context was either canceled or the deadline was exceeded: context canceled" id=762ee4cc-469e-4d15-96b3-3b3d053705be name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a85f655104f8d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   f1db7a88a9cb1       storage-provisioner                          kube-system
	8cf7653df43bf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   345910942c6d4       dashboard-metrics-scraper-867fb5f87b-p6wnm   kubernetes-dashboard
	1323310fa79d9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   9082eef2bf85b       kubernetes-dashboard-b84665fb8-5pf6p         kubernetes-dashboard
	b8b7bbd3a73fd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   58ef475d91786       coredns-7d764666f9-hr4gk                     kube-system
	8a59823d5d04a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   308e18b34d485       busybox                                      default
	c9af406a71bb6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   f1db7a88a9cb1       storage-provisioner                          kube-system
	8b0ce37d641b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   298863928bd45       kindnet-28s4q                                kube-system
	1674e78c0eb17       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           48 seconds ago      Running             kube-proxy                  0                   e8ef6f46aa6cb       kube-proxy-c62hk                             kube-system
	7a81e637bcbb8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           51 seconds ago      Running             kube-apiserver              0                   256e00e0c17c6       kube-apiserver-no-preload-713838             kube-system
	626d40c34f5a9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           51 seconds ago      Running             etcd                        0                   be572d6cd6b9d       etcd-no-preload-713838                       kube-system
	7e3a6ab1e6a60       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           51 seconds ago      Running             kube-scheduler              0                   303ae97049cc8       kube-scheduler-no-preload-713838             kube-system
	352c8f0e348fa       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           51 seconds ago      Running             kube-controller-manager     0                   ffe78e4cd7ce3       kube-controller-manager-no-preload-713838    kube-system
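	A table like the one above is what the runtime reports on the node; a minimal sketch (run inside the node, default CRI-O socket assumed):
	# list all containers, including exited ones
	sudo crictl ps -a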
	
	
	==> coredns [b8b7bbd3a73fd38688e69778eb82aaaf4c797868eabe1e3f152394098b131417] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51330 - 49313 "HINFO IN 6347578993037087190.1957112465553760425. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060421288s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
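	The CoreDNS output above is a container log; a minimal sketch of re-fetching it via the runtime, using the container ID prefix from the section header (assuming crictl resolves ID prefixes):
	sudo crictl logs b8b7bbd3a73fd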
	
	
	==> describe nodes <==
	Name:               no-preload-713838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-713838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=no-preload-713838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-713838
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-713838
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                a0db2673-3e21-49dd-84c2-7b2766bdcea4
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-7d764666f9-hr4gk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-713838                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-28s4q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-713838              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-713838     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-c62hk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-713838              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-p6wnm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-5pf6p          0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  107s  node-controller  Node no-preload-713838 event: Registered Node no-preload-713838 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node no-preload-713838 event: Registered Node no-preload-713838 in Controller
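	The node description above mirrors what kubectl reports for this node; a minimal sketch (run against the cluster's kubeconfig):
	kubectl describe node no-preload-713838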
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [626d40c34f5a9cc949c8b0c13c01036e5fa575714b2210e614b23214089d41e2] <==
	{"level":"warn","ts":"2025-12-10T06:24:26.838019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.847442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.864775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.874766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.887002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.905202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.914764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.925295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.941670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.949650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.959857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.970881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.981333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.992443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.001710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.011633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.021017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.029708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.039140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.052068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.061018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.076065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.085410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.094813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.104420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:17 up  1:07,  0 user,  load average: 4.79, 4.82, 3.04
	Linux no-preload-713838 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b0ce37d641b86571cb5fe3e7bea6acb3968e201d71d2f8e58691e954136608d] <==
	I1210 06:24:28.656036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:28.658541       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:24:28.658768       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:28.658795       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:28.658824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:28.953435       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:28.953523       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:28.953541       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:28.954620       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:29.354522       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:29.354562       1 metrics.go:72] Registering metrics
	I1210 06:24:29.354639       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:38.953909       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:24:38.953968       1 main.go:301] handling current node
	I1210 06:24:48.953350       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:24:48.953403       1 main.go:301] handling current node
	I1210 06:24:58.953909       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:24:58.953949       1 main.go:301] handling current node
	I1210 06:25:08.960586       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:25:08.960620       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a81e637bcbb822a79d3c9d17ceb44a480481cefb6fd1bd5c4f5c51620d65578] <==
	I1210 06:24:27.721163       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:24:27.721909       1 aggregator.go:187] initial CRD sync complete...
	I1210 06:24:27.722917       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:24:27.722968       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:24:27.722993       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:24:27.721100       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:27.721090       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:24:27.725492       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:27.733188       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:24:27.737645       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:24:27.754603       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:27.758331       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:24:28.132544       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:24:28.159751       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:24:28.187201       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:24:28.214122       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:28.226609       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:28.281062       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.242.60"}
	I1210 06:24:28.302862       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.26.231"}
	I1210 06:24:28.623340       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:24:31.331071       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:24:31.479852       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:31.529421       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [352c8f0e348fa006abc84878109c4605c54ea03f96f88b48143b6b659f4b95cb] <==
	I1210 06:24:30.889838       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889853       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889549       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889989       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890028       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890044       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890161       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890197       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890200       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890350       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890652       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890689       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.891301       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.891565       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889260       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889675       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:24:30.892059       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-713838"
	I1210 06:24:30.892127       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:24:30.893717       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.896744       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:24:30.900447       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.988382       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.988407       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:24:30.988414       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:24:30.997525       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1674e78c0eb170c33491a539a64875481799478be07a401ad67fa61986708cd8] <==
	I1210 06:24:28.484406       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:24:28.568226       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:24:28.668904       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:28.668951       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:24:28.669050       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:24:28.741139       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:28.741211       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:24:28.748007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:24:28.748503       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 06:24:28.748585       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:28.750267       1 config.go:200] "Starting service config controller"
	I1210 06:24:28.750288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:24:28.750397       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:24:28.750409       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:24:28.750455       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:24:28.750461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:24:28.750465       1 config.go:309] "Starting node config controller"
	I1210 06:24:28.750514       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:24:28.750521       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:24:28.850682       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:24:28.850826       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:24:28.852672       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7e3a6ab1e6a60502ba96ff6d0a9a8e22ac37e396772330314f4ad7f55de8b26b] <==
	I1210 06:24:25.821099       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:24:27.680612       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:24:27.680651       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:24:27.680664       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:24:27.680674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:24:27.733347       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 06:24:27.733383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:27.736524       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:27.736942       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:24:27.736883       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:24:27.736907       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:24:27.837754       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:24:41 no-preload-713838 kubelet[714]: E1210 06:24:41.133901     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:41 no-preload-713838 kubelet[714]: I1210 06:24:41.133933     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:41 no-preload-713838 kubelet[714]: E1210 06:24:41.134103     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:42 no-preload-713838 kubelet[714]: E1210 06:24:42.511725     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-713838" containerName="kube-apiserver"
	Dec 10 06:24:43 no-preload-713838 kubelet[714]: E1210 06:24:43.138927     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-713838" containerName="kube-apiserver"
	Dec 10 06:24:48 no-preload-713838 kubelet[714]: E1210 06:24:48.452767     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:48 no-preload-713838 kubelet[714]: I1210 06:24:48.452813     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:48 no-preload-713838 kubelet[714]: E1210 06:24:48.453052     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: E1210 06:24:50.034661     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: I1210 06:24:50.034716     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: I1210 06:24:50.160380     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: E1210 06:24:50.160809     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: I1210 06:24:50.160845     714 scope.go:122] "RemoveContainer" containerID="8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: E1210 06:24:50.161071     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:58 no-preload-713838 kubelet[714]: E1210 06:24:58.452563     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:58 no-preload-713838 kubelet[714]: I1210 06:24:58.452611     714 scope.go:122] "RemoveContainer" containerID="8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	Dec 10 06:24:58 no-preload-713838 kubelet[714]: E1210 06:24:58.452780     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:59 no-preload-713838 kubelet[714]: I1210 06:24:59.187139     714 scope.go:122] "RemoveContainer" containerID="c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945"
	Dec 10 06:25:00 no-preload-713838 kubelet[714]: E1210 06:25:00.090767     714 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hr4gk" containerName="coredns"
	Dec 10 06:25:14 no-preload-713838 kubelet[714]: E1210 06:25:14.035109     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:25:14 no-preload-713838 kubelet[714]: I1210 06:25:14.035167     714 scope.go:122] "RemoveContainer" containerID="8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	Dec 10 06:25:14 no-preload-713838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:14 no-preload-713838 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:14 no-preload-713838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:25:14 no-preload-713838 systemd[1]: kubelet.service: Consumed 1.779s CPU time.
	
	
	==> kubernetes-dashboard [1323310fa79d97e897f48248a3271e2d891ea27849df186d92211b9ee5b46f18] <==
	2025/12/10 06:24:35 Starting overwatch
	2025/12/10 06:24:35 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:35 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:35 Using secret token for csrf signing
	2025/12/10 06:24:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:35 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/10 06:24:35 Generating JWE encryption key
	2025/12/10 06:24:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:36 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:36 Creating in-cluster Sidecar client
	2025/12/10 06:24:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:36 Serving insecurely on HTTP port: 9090
	2025/12/10 06:25:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a] <==
	I1210 06:24:59.252954       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:24:59.262652       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:24:59.262746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:24:59.265238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:02.721459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:06.982280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:10.580692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:13.635082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.657636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.663784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:16.664044       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:25:16.664162       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13195869-7f7c-4acf-98a0-df0b10b14e40", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-713838_7f7be3ad-6468-403a-8fa0-d0509a1443f4 became leader
	I1210 06:25:16.664228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-713838_7f7be3ad-6468-403a-8fa0-d0509a1443f4!
	W1210 06:25:16.667090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.670840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:16.765161       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-713838_7f7be3ad-6468-403a-8fa0-d0509a1443f4!
	
	
	==> storage-provisioner [c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945] <==
	I1210 06:24:28.454864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:24:58.457734       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
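The first storage-provisioner instance in the logs above died timing out against the in-cluster service VIP (Get "https://10.96.0.1:443/version": i/o timeout), while the replacement instance later acquired the leader lease. A quick manual cross-check of that same path from inside the node, assuming curl is available in the node image and reusing the profile name and service IP straight from the logs, could look like:

    # probe the kubernetes service VIP from inside the minikube node (illustrative)
    out/minikube-linux-amd64 -p no-preload-713838 ssh "curl -sk --max-time 5 https://10.96.0.1:443/version"

A prompt version response would suggest the earlier timeout was transient; a hang would point more toward kube-proxy or CNI programming on the node.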
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713838 -n no-preload-713838
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713838 -n no-preload-713838: exit status 2 (372.457638ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-713838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
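The field selector in the command above restricts the listing to pods whose phase is not Running (Pending, Failed, Succeeded, and so on), and the jsonpath output prints only their names. A more readable manual variant, illustrative but using the same context and selector, is:

    kubectl --context no-preload-713838 get pods -A --field-selector=status.phase!=Running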
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-713838
helpers_test.go:244: (dbg) docker inspect no-preload-713838:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2",
	        "Created": "2025-12-10T06:22:56.695408224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:24:17.801249327Z",
	            "FinishedAt": "2025-12-10T06:24:16.706896856Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/hosts",
	        "LogPath": "/var/lib/docker/containers/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2/4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2-json.log",
	        "Name": "/no-preload-713838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-713838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-713838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a9af4b439c2bab76cdd83fb5b3fc2cdad65b17f7ccbe3c7f3909b3e503a9bb2",
	                "LowerDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6547a92011e88654ac2d53d62edbbe331cd1387dcdf27af48e639e84ea20cdad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-713838",
	                "Source": "/var/lib/docker/volumes/no-preload-713838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-713838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-713838",
	                "name.minikube.sigs.k8s.io": "no-preload-713838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "795e8ad253d7bee2f68d1d6fb5f76e044f53900c178ca76f6287af15c798f873",
	            "SandboxKey": "/var/run/docker/netns/795e8ad253d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-713838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8987097bf8a19a968989f80c7ad4a35d96813c7e6580ac101cba37c806b19e54",
	                    "EndpointID": "bdcc53a3bf749ba7f4501d45df356f14f95abd5410a67179fa7ca326ce698e81",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9e:3b:de:f9:af:39",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-713838",
	                        "4a9af4b439c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
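Most of the docker inspect output above is boilerplate; for a pause failure the State block is the interesting part. A narrower query against the same container, illustrative and using docker's Go-template formatting, would be:

    # print only the container state fields relevant to pause/unpause
    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' no-preload-713838

Note that Paused here describes the outer KIC container, not the Kubernetes workloads that minikube pause targets, so "paused=false" on its own is not evidence of the failure.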
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838: exit status 2 (375.002819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
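minikube status uses non-zero exit codes to signal component state rather than only hard failures, which is presumably why the harness flags exit status 2 as "(may be ok)". When triaging by hand it can help to dump the full status object instead of a single Go-template field, e.g. (illustrative):

    out/minikube-linux-amd64 status -p no-preload-713838 --output=json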
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-713838 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-713838 logs -n 25: (1.266752054s)
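The -n 25 flag caps each component section below at its last 25 lines, which is why the per-component logs are short. For a deeper post-mortem the same command can write the full output to a file (path here is illustrative):

    out/minikube-linux-amd64 -p no-preload-713838 logs --file=/tmp/no-preload-713838-logs.txt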
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-998062                                                                                                                                                                                                                      │ disable-driver-mounts-998062 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-424086 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:24:59.327087  336887 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:59.327365  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327375  336887 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:59.327379  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327669  336887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:59.328143  336887 out.go:368] Setting JSON to false
	I1210 06:24:59.329429  336887 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4050,"bootTime":1765343849,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:59.329519  336887 start.go:143] virtualization: kvm guest
	I1210 06:24:59.331611  336887 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:59.333096  336887 notify.go:221] Checking for updates...
	I1210 06:24:59.333116  336887 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:59.334447  336887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:59.336068  336887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:59.337494  336887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:59.338960  336887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:59.340340  336887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:59.342187  336887 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342330  336887 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342492  336887 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:24:59.342623  336887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:59.369242  336887 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:59.369328  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.432140  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.420604919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.432256  336887 docker.go:319] overlay module found
	I1210 06:24:59.435201  336887 out.go:179] * Using the docker driver based on user configuration
	W1210 06:24:55.887075  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:58.386507  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:24:59.436402  336887 start.go:309] selected driver: docker
	I1210 06:24:59.436415  336887 start.go:927] validating driver "docker" against <nil>
	I1210 06:24:59.436427  336887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:59.436998  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.496347  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.486011226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.496517  336887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:24:59.496554  336887 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:24:59.496758  336887 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:24:59.499173  336887 out.go:179] * Using Docker driver with root privileges
	I1210 06:24:59.500516  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:24:59.500598  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:59.500612  336887 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:24:59.500684  336887 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:59.502093  336887 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:24:59.503450  336887 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:59.504798  336887 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:59.506022  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:24:59.506091  336887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:59.506102  336887 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:59.506114  336887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:59.506191  336887 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:59.506203  336887 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:24:59.506300  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:24:59.506323  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json: {Name:mkdf58f074b298e370024a6ce1eb0198fc1a1932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:59.529599  336887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:59.529619  336887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:59.529645  336887 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:24:59.529672  336887 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:59.529766  336887 start.go:364] duration metric: took 78.432µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:24:59.529787  336887 start.go:93] Provisioning new machine with config: &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:59.529851  336887 start.go:125] createHost starting for "" (driver="docker")
	W1210 06:24:58.946860  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:25:00.446892  326955 pod_ready.go:94] pod "coredns-7d764666f9-hr4gk" is "Ready"
	I1210 06:25:00.446917  326955 pod_ready.go:86] duration metric: took 31.006503405s for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.449783  326955 pod_ready.go:83] waiting for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.454644  326955 pod_ready.go:94] pod "etcd-no-preload-713838" is "Ready"
	I1210 06:25:00.454673  326955 pod_ready.go:86] duration metric: took 4.863318ms for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.457203  326955 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.462197  326955 pod_ready.go:94] pod "kube-apiserver-no-preload-713838" is "Ready"
	I1210 06:25:00.462227  326955 pod_ready.go:86] duration metric: took 4.996726ms for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.464859  326955 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.643687  326955 pod_ready.go:94] pod "kube-controller-manager-no-preload-713838" is "Ready"
	I1210 06:25:00.643711  326955 pod_ready.go:86] duration metric: took 178.834657ms for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.844018  326955 pod_ready.go:83] waiting for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.244075  326955 pod_ready.go:94] pod "kube-proxy-c62hk" is "Ready"
	I1210 06:25:01.244105  326955 pod_ready.go:86] duration metric: took 400.060427ms for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.445041  326955 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843827  326955 pod_ready.go:94] pod "kube-scheduler-no-preload-713838" is "Ready"
	I1210 06:25:01.843854  326955 pod_ready.go:86] duration metric: took 398.788804ms for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843867  326955 pod_ready.go:40] duration metric: took 32.407570406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:01.891782  326955 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:01.897299  326955 out.go:179] * Done! kubectl is now configured to use "no-preload-713838" cluster and "default" namespace by default
	W1210 06:25:00.080872  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:02.579615  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:59.532875  336887 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:24:59.533186  336887 start.go:159] libmachine.API.Create for "newest-cni-126107" (driver="docker")
	I1210 06:24:59.533225  336887 client.go:173] LocalClient.Create starting
	I1210 06:24:59.533327  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 06:24:59.533388  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533416  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533500  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 06:24:59.533540  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533557  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533982  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:24:59.552885  336887 cli_runner.go:211] docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:24:59.552988  336887 network_create.go:284] running [docker network inspect newest-cni-126107] to gather additional debugging logs...
	I1210 06:24:59.553008  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107
	W1210 06:24:59.572451  336887 cli_runner.go:211] docker network inspect newest-cni-126107 returned with exit code 1
	I1210 06:24:59.572534  336887 network_create.go:287] error running [docker network inspect newest-cni-126107]: docker network inspect newest-cni-126107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-126107 not found
	I1210 06:24:59.572551  336887 network_create.go:289] output of [docker network inspect newest-cni-126107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-126107 not found
	
	** /stderr **
	I1210 06:24:59.572710  336887 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:59.592775  336887 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
	I1210 06:24:59.593342  336887 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2fbfa5ca31a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:30:9e:0a:da:73} reservation:<nil>}
	I1210 06:24:59.594133  336887 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-68b4fc4b224b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:0a:d7:21:69:83} reservation:<nil>}
	I1210 06:24:59.594915  336887 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a24a8ad90ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ea:e5:16:4c:6f} reservation:<nil>}
	I1210 06:24:59.595927  336887 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd18e0}
	I1210 06:24:59.595955  336887 network_create.go:124] attempt to create docker network newest-cni-126107 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:24:59.596007  336887 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-126107 newest-cni-126107
	I1210 06:24:59.648242  336887 network_create.go:108] docker network newest-cni-126107 192.168.85.0/24 created
	I1210 06:24:59.648276  336887 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-126107" container
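The lines above show the free-subnet probe: minikube steps through candidate /24 ranges (192.168.49.0, .58.0, .67.0, .76.0, ...) and creates a bridge network on the first one that no existing Docker network occupies. A rough shell equivalent of that probe, for anyone reproducing the setup by hand (the subnet sequence and label come from the log; the network name and the rest of this sketch are illustrative, not minikube's actual code):

# collect the subnets already claimed by existing Docker networks
taken=$(docker network ls -q | xargs -r docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
for third in 49 58 67 76 85 94; do
  subnet="192.168.${third}.0/24"; gateway="192.168.${third}.1"
  # the first range no existing network uses wins
  if ! printf '%s\n' "$taken" | grep -qx "$subnet"; then
    docker network create --driver=bridge --subnet="$subnet" --gateway="$gateway" \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true example-net
    break
  fi
done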
	I1210 06:24:59.648334  336887 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:24:59.667592  336887 cli_runner.go:164] Run: docker volume create newest-cni-126107 --label name.minikube.sigs.k8s.io=newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:24:59.686982  336887 oci.go:103] Successfully created a docker volume newest-cni-126107
	I1210 06:24:59.687084  336887 cli_runner.go:164] Run: docker run --rm --name newest-cni-126107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --entrypoint /usr/bin/test -v newest-cni-126107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:25:00.115171  336887 oci.go:107] Successfully prepared a docker volume newest-cni-126107
	I1210 06:25:00.115245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:00.115259  336887 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:25:00.115360  336887 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:25:04.112675  336887 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.997248616s)
	I1210 06:25:04.112712  336887 kic.go:203] duration metric: took 3.997449096s to extract preloaded images to volume ...
	W1210 06:25:04.112837  336887 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:25:04.112877  336887 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:25:04.112928  336887 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:25:04.172016  336887 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-126107 --name newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-126107 --network newest-cni-126107 --ip 192.168.85.2 --volume newest-cni-126107:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	W1210 06:25:00.387118  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:02.917573  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:04.579873  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:06.580394  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:25:07.580576  327833 pod_ready.go:94] pod "coredns-66bc5c9577-gw75x" is "Ready"
	I1210 06:25:07.580605  327833 pod_ready.go:86] duration metric: took 37.506619554s for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.583509  327833 pod_ready.go:83] waiting for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.587865  327833 pod_ready.go:94] pod "etcd-embed-certs-133470" is "Ready"
	I1210 06:25:07.587890  327833 pod_ready.go:86] duration metric: took 4.359471ms for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.590170  327833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.594746  327833 pod_ready.go:94] pod "kube-apiserver-embed-certs-133470" is "Ready"
	I1210 06:25:07.594774  327833 pod_ready.go:86] duration metric: took 4.57905ms for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.596975  327833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.778320  327833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-133470" is "Ready"
	I1210 06:25:07.778347  327833 pod_ready.go:86] duration metric: took 181.346408ms for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.979006  327833 pod_ready.go:83] waiting for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.378607  327833 pod_ready.go:94] pod "kube-proxy-fkdk9" is "Ready"
	I1210 06:25:08.378631  327833 pod_ready.go:86] duration metric: took 399.601345ms for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.578014  327833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978761  327833 pod_ready.go:94] pod "kube-scheduler-embed-certs-133470" is "Ready"
	I1210 06:25:08.978787  327833 pod_ready.go:86] duration metric: took 400.749384ms for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978798  327833 pod_ready.go:40] duration metric: took 38.909473428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:09.028286  327833 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:09.030218  327833 out.go:179] * Done! kubectl is now configured to use "embed-certs-133470" cluster and "default" namespace by default
	I1210 06:25:04.481386  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Running}}
	I1210 06:25:04.502244  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.522735  336887 cli_runner.go:164] Run: docker exec newest-cni-126107 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:25:04.571010  336887 oci.go:144] the created container "newest-cni-126107" has a running status.
	I1210 06:25:04.571044  336887 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa...
	I1210 06:25:04.663409  336887 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:25:04.690550  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.713575  336887 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:25:04.713604  336887 kic_runner.go:114] Args: [docker exec --privileged newest-cni-126107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:25:04.767064  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.791773  336887 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:04.791873  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:04.819325  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:04.819813  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:04.819834  336887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:04.820667  336887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:25:07.958166  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:07.958195  336887 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:07.958260  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:07.980501  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:07.980710  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:07.980728  336887 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:08.127040  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:08.127128  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.147687  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.147963  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.147982  336887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:08.283513  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:08.283545  336887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:08.283569  336887 ubuntu.go:190] setting up certificates
	I1210 06:25:08.283582  336887 provision.go:84] configureAuth start
	I1210 06:25:08.283641  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:08.304777  336887 provision.go:143] copyHostCerts
	I1210 06:25:08.304859  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:08.304870  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:08.304943  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:08.305028  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:08.305036  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:08.305061  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:08.305130  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:08.305138  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:08.305161  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:08.305231  336887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:08.358046  336887 provision.go:177] copyRemoteCerts
	I1210 06:25:08.358115  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:08.358153  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.378428  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.475365  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:08.497101  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:08.517033  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:08.536354  336887 provision.go:87] duration metric: took 252.752199ms to configureAuth
	I1210 06:25:08.536379  336887 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:08.536554  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:08.536656  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.556388  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.556749  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.556781  336887 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:08.835275  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:08.835301  336887 machine.go:97] duration metric: took 4.043503325s to provisionDockerMachine
	I1210 06:25:08.835313  336887 client.go:176] duration metric: took 9.302078213s to LocalClient.Create
	I1210 06:25:08.835335  336887 start.go:167] duration metric: took 9.302149263s to libmachine.API.Create "newest-cni-126107"
	I1210 06:25:08.835345  336887 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:08.835361  336887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:08.835432  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:08.835497  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.855854  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.956961  336887 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:08.961167  336887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:08.961201  336887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:08.961213  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:08.961271  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:08.961344  336887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:08.961433  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:08.970695  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:08.995442  336887 start.go:296] duration metric: took 160.082878ms for postStartSetup
	I1210 06:25:08.995880  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.016559  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:09.016908  336887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:09.016964  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.038838  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.139907  336887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:09.145902  336887 start.go:128] duration metric: took 9.616033039s to createHost
	I1210 06:25:09.145930  336887 start.go:83] releasing machines lock for "newest-cni-126107", held for 9.616152275s
	I1210 06:25:09.146007  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.166587  336887 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:09.166650  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.166669  336887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:09.166759  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.189521  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.189525  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.284007  336887 ssh_runner.go:195] Run: systemctl --version
	W1210 06:25:05.386403  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:07.387202  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:09.387389  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:09.351948  336887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:09.392017  336887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:09.397100  336887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:09.397159  336887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:09.426437  336887 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:25:09.426486  336887 start.go:496] detecting cgroup driver to use...
	I1210 06:25:09.426524  336887 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:09.426570  336887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:09.444100  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:09.457503  336887 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:09.457569  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:09.475303  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:09.495265  336887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:09.584209  336887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:09.673201  336887 docker.go:234] disabling docker service ...
	I1210 06:25:09.673262  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:09.692964  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:09.706562  336887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:09.794361  336887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:09.886009  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:09.899964  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:09.915638  336887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:09.915690  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.927534  336887 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:09.927591  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.937774  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.947722  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.957780  336887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:09.967038  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.977926  336887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.993658  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:10.003638  336887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:10.012100  336887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:10.021305  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.110274  336887 ssh_runner.go:195] Run: sudo systemctl restart crio
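The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. Pulled together, the settings they aim for would read roughly like the drop-in below (the [crio.image]/[crio.runtime] section names follow the standard CRI-O config layout and the file name is hypothetical; this is a sketch of the end state, not the file minikube writes):

sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null <<'EOF'
[crio.image]
# pause image pinned by the first sed above
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
# match the host's systemd cgroup driver
cgroup_manager = "systemd"
conmon_cgroup = "pod"
# let pods bind low ports without extra capabilities
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl restart crio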
	I1210 06:25:10.246619  336887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:10.246690  336887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:10.251096  336887 start.go:564] Will wait 60s for crictl version
	I1210 06:25:10.251165  336887 ssh_runner.go:195] Run: which crictl
	I1210 06:25:10.255306  336887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:10.283066  336887 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:10.283157  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.313027  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.346493  336887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:10.348155  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:10.367398  336887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:10.371843  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.385684  336887 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:10.387117  336887 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:10.387245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:10.387300  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.421783  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.421805  336887 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:10.421852  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.448367  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.448389  336887 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:10.448395  336887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:10.448494  336887 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
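The kubelet unit and drop-in rendered above are written to disk in the scp steps a few lines below (10-kubeadm.conf and kubelet.service). A quick way to verify the merged result on the node, using generic systemd commands rather than anything from this log, would be:

systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in
sudo systemctl daemon-reload && sudo systemctl restart kubelet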
	I1210 06:25:10.448573  336887 ssh_runner.go:195] Run: crio config
	I1210 06:25:10.498037  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:10.498063  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:10.498081  336887 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:10.498120  336887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:10.498246  336887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:25:10.498306  336887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:10.507229  336887 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:10.507302  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:10.516385  336887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:10.530854  336887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:10.548260  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
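The kubeadm.yaml.new copied here is the rendered form of the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration dump above. minikube drives kubeadm itself later in the start sequence; done by hand, the equivalent step would be along these lines (the final config path and the preflight flag are assumptions, not taken from this log):

sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=all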
	I1210 06:25:10.563281  336887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:10.567436  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.578747  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.660880  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:10.688248  336887 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:10.688268  336887 certs.go:195] generating shared ca certs ...
	I1210 06:25:10.688286  336887 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.688431  336887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:10.688526  336887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:10.688544  336887 certs.go:257] generating profile certs ...
	I1210 06:25:10.688612  336887 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:10.688636  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt with IP's: []
	I1210 06:25:10.813463  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt ...
	I1210 06:25:10.813530  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt: {Name:mk7009f3bf80c2397e5ae6cdebdca2735a7f7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813756  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key ...
	I1210 06:25:10.813772  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key: {Name:mk6d255207a819b82a749c48b0009054007ff91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813864  336887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:10.813882  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:25:11.022417  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf ...
	I1210 06:25:11.022443  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf: {Name:mk09a2e21f902ac4eed926780c1f90cb426b5a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022619  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf ...
	I1210 06:25:11.022632  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf: {Name:mkc73ed6c35fb6a21244daf518e5b2d0a7440a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022704  336887 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt
	I1210 06:25:11.022778  336887 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key
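The apiserver certificate assembled above carries the service VIP, loopback, and node IPs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2) as SANs. minikube generates it in Go; a comparable certificate signed by the same CA could be produced with openssl roughly as follows (CN, validity, and file names are illustrative assumptions; the process substitution needs bash):

openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
  -keyout apiserver.key -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out apiserver.crt \
  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')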
	I1210 06:25:11.022831  336887 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:11.022848  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt with IP's: []
	I1210 06:25:11.088507  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt ...
	I1210 06:25:11.088534  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt: {Name:mkdd3c9abbfeb78fdbbafdaf53f324a4a2e625ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088686  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key ...
	I1210 06:25:11.088699  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key: {Name:mkd22ad5ae4429236c87cce8641338a9393df47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088869  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:11.088906  336887 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:11.088917  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:11.088939  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:11.088963  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:11.088988  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:11.089034  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:11.089621  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:11.108552  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:11.127416  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:11.146079  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:11.164732  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:11.183864  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:11.202457  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:11.221380  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:11.241165  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:11.262201  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:11.282304  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:11.302104  336887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:11.316208  336887 ssh_runner.go:195] Run: openssl version
	I1210 06:25:11.323011  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.331150  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:11.339353  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343453  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343539  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.378191  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:25:11.387532  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:25:11.395709  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.403915  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:11.413083  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417256  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417315  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.452744  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.460975  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.468848  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.477072  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:11.485572  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490083  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490144  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.529873  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:11.538675  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
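
The openssl/ln sequence above is how minikube registers host CA certificates with the node's OpenSSL trust store: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and symlinked into /etc/ssl/certs as <subject-hash>.0. A minimal standalone sketch of that convention (not minikube's certs.go; it assumes openssl is on PATH and the process may write /etc/ssl/certs):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCA mirrors the "openssl x509 -hash" + "ln -fs" steps from the log:
	// compute the OpenSSL subject hash of a PEM and expose it as <hash>.0 so
	// OpenSSL-based clients can discover it under /etc/ssl/certs.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		// Hypothetical invocation using the same path seen in the log above.
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
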
	I1210 06:25:11.547942  336887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:11.552437  336887 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:25:11.552529  336887 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:11.552617  336887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:11.552673  336887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:11.582819  336887 cri.go:89] found id: ""
	I1210 06:25:11.582893  336887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:11.591576  336887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:25:11.600085  336887 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:25:11.600143  336887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:25:11.608700  336887 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:25:11.608723  336887 kubeadm.go:158] found existing configuration files:
	
	I1210 06:25:11.608773  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:25:11.617207  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:25:11.617265  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:25:11.625691  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:25:11.634058  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:25:11.634138  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:25:11.642174  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.650696  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:25:11.650751  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.658854  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:25:11.667261  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:25:11.667309  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
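
The grep/rm sequence above is the stale-config cleanup that runs before kubeadm init: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and removed when it does not reference it (here the files do not exist yet, so every grep exits with status 2 and the rm is a no-op). A minimal sketch of that check, assuming the same endpoint and file list shown in the log:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or a config pointing elsewhere: remove it so
				// kubeadm init regenerates it (the log's "sudo rm -f" step).
				_ = os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
				continue
			}
			fmt.Printf("keeping %s\n", f)
		}
	}
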
	I1210 06:25:11.675445  336887 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:25:11.717793  336887 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:25:11.717857  336887 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:25:11.787773  336887 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:25:11.787862  336887 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:25:11.787918  336887 kubeadm.go:319] OS: Linux
	I1210 06:25:11.788013  336887 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:25:11.788088  336887 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:25:11.788209  336887 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:25:11.788287  336887 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:25:11.788329  336887 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:25:11.788400  336887 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:25:11.788501  336887 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:25:11.788573  336887 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:25:11.851680  336887 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:25:11.851818  336887 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:25:11.851989  336887 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:25:11.859860  336887 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:25:11.863122  336887 out.go:252]   - Generating certificates and keys ...
	I1210 06:25:11.863226  336887 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:25:11.863328  336887 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:25:11.994891  336887 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:25:12.216319  336887 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:25:12.263074  336887 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:25:12.317348  336887 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:25:12.348525  336887 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:25:12.348673  336887 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.453542  336887 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:25:12.453734  336887 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.554979  336887 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:25:12.639691  336887 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:25:12.675769  336887 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:25:12.675887  336887 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:25:12.733954  336887 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:25:12.762974  336887 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:25:12.895579  336887 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:25:12.968568  336887 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:25:13.242877  336887 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:25:13.243493  336887 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:25:13.247727  336887 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:25:13.249454  336887 out.go:252]   - Booting up control plane ...
	I1210 06:25:13.249584  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:25:13.249689  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:25:13.249772  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:25:13.266130  336887 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:25:13.266243  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:25:13.273740  336887 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:25:13.274070  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:25:13.274119  336887 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:25:13.387904  336887 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:25:13.388113  336887 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:25:13.888860  336887 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.995328ms
	I1210 06:25:13.892049  336887 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:25:13.892166  336887 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 06:25:13.892313  336887 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:25:13.892419  336887 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 06:25:11.887626  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:14.389916  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:14.896145  336887 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004021858s
	I1210 06:25:16.123662  336887 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.231275136s
	I1210 06:25:17.894620  336887 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00240365s
	I1210 06:25:17.919519  336887 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:25:17.933110  336887 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:25:17.946133  336887 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:25:17.946406  336887 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-126107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:25:17.956662  336887 kubeadm.go:319] [bootstrap-token] Using token: x794l4.dwxrqyazh7co8i2b
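
The [control-plane-check] lines above show kubeadm polling the kubelet healthz endpoint and the component health endpoints (kube-apiserver /livez, controller-manager and scheduler /healthz or /livez) until they answer 200 or the 4m0s budget runs out. A minimal standalone poller in the same spirit (not kubeadm's implementation; the URL below is the one printed in the log, and TLS verification is skipped only because this probe runs outside the cluster's trust store):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)
	
	// waitHealthy polls url until it answers 200 OK or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The probe runs outside the cluster's PKI, so skip
				// verification for this health check only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthy("https://192.168.85.2:8443/livez", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
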
	
	
	==> CRI-O <==
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.052041074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.052914709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.098987069Z" level=info msg="Created container 8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper" id=cba31ee2-2f19-4ad6-8f9b-39e560948695 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.099801464Z" level=info msg="Starting container: 8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97" id=42b90fa6-0ab3-4837-a72c-c6b50c50d1cd name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.102261609Z" level=info msg="Started container" PID=1748 containerID=8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper id=42b90fa6-0ab3-4837-a72c-c6b50c50d1cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=345910942c6d4641cddc49fd871bbcb7437009051a237bb21a21a4e592bd736a
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.163243895Z" level=info msg="Removing container: 34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c" id=6a605486-f1b6-47a5-b59a-1fdb229a5d76 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:50 no-preload-713838 crio[564]: time="2025-12-10T06:24:50.180230637Z" level=info msg="Removed container 34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper" id=6a605486-f1b6-47a5-b59a-1fdb229a5d76 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.187657225Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6e54868-f385-42ca-9935-34079c95d055 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.188724273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=810cc2d5-3ff7-4298-9139-82383ea03f72 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.189790066Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1bd0eeee-7bb9-4613-bdc4-105436abd31b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.189919119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.194943362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.195129843Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/750129e30dbd7296173f32bee2306c7fb3d89d60f6312e401c95b6e52558e5e5/merged/etc/passwd: no such file or directory"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.19516514Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/750129e30dbd7296173f32bee2306c7fb3d89d60f6312e401c95b6e52558e5e5/merged/etc/group: no such file or directory"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.195461868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.234027958Z" level=info msg="Created container a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a: kube-system/storage-provisioner/storage-provisioner" id=1bd0eeee-7bb9-4613-bdc4-105436abd31b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.234869567Z" level=info msg="Starting container: a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a" id=023b35c5-7e46-4276-9597-3af59bb2e1ac name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:59 no-preload-713838 crio[564]: time="2025-12-10T06:24:59.236948399Z" level=info msg="Started container" PID=1762 containerID=a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a description=kube-system/storage-provisioner/storage-provisioner id=023b35c5-7e46-4276-9597-3af59bb2e1ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1db7a88a9cb146f8d3e1d996e0f11efeba57fc7e08ce35bb34bfb350475a724
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.035942199Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4a0471d0-7373-4fae-bbba-d4d28dbb39f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.037037757Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3323b6dd-ebd0-4845-9e48-13668b37f81e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.038141408Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm/dashboard-metrics-scraper" id=762ee4cc-469e-4d15-96b3-3b3d053705be name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.038281125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.044992817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.045643056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:14 no-preload-713838 crio[564]: time="2025-12-10T06:25:14.098319735Z" level=info msg="CreateCtr: context was either canceled or the deadline was exceeded: context canceled" id=762ee4cc-469e-4d15-96b3-3b3d053705be name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a85f655104f8d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   f1db7a88a9cb1       storage-provisioner                          kube-system
	8cf7653df43bf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   345910942c6d4       dashboard-metrics-scraper-867fb5f87b-p6wnm   kubernetes-dashboard
	1323310fa79d9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   9082eef2bf85b       kubernetes-dashboard-b84665fb8-5pf6p         kubernetes-dashboard
	b8b7bbd3a73fd       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   58ef475d91786       coredns-7d764666f9-hr4gk                     kube-system
	8a59823d5d04a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   308e18b34d485       busybox                                      default
	c9af406a71bb6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   f1db7a88a9cb1       storage-provisioner                          kube-system
	8b0ce37d641b8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   298863928bd45       kindnet-28s4q                                kube-system
	1674e78c0eb17       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           50 seconds ago      Running             kube-proxy                  0                   e8ef6f46aa6cb       kube-proxy-c62hk                             kube-system
	7a81e637bcbb8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           53 seconds ago      Running             kube-apiserver              0                   256e00e0c17c6       kube-apiserver-no-preload-713838             kube-system
	626d40c34f5a9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           53 seconds ago      Running             etcd                        0                   be572d6cd6b9d       etcd-no-preload-713838                       kube-system
	7e3a6ab1e6a60       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           53 seconds ago      Running             kube-scheduler              0                   303ae97049cc8       kube-scheduler-no-preload-713838             kube-system
	352c8f0e348fa       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           53 seconds ago      Running             kube-controller-manager     0                   ffe78e4cd7ce3       kube-controller-manager-no-preload-713838    kube-system
	
	
	==> coredns [b8b7bbd3a73fd38688e69778eb82aaaf4c797868eabe1e3f152394098b131417] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51330 - 49313 "HINFO IN 6347578993037087190.1957112465553760425. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060421288s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-713838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-713838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=no-preload-713838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-713838
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:24:58 +0000   Wed, 10 Dec 2025 06:23:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-713838
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                a0db2673-3e21-49dd-84c2-7b2766bdcea4
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-hr4gk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-713838                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-28s4q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-713838              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-713838     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-c62hk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-713838              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-p6wnm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-5pf6p          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-713838 event: Registered Node no-preload-713838 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node no-preload-713838 event: Registered Node no-preload-713838 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [626d40c34f5a9cc949c8b0c13c01036e5fa575714b2210e614b23214089d41e2] <==
	{"level":"warn","ts":"2025-12-10T06:24:26.838019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.847442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.864775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.874766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.887002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.905202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.914764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.925295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.941670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.949650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.959857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.970881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.981333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:26.992443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.001710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.011633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.021017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.029708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.039140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.052068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.061018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.076065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.085410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.094813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.104420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:19 up  1:07,  0 user,  load average: 4.79, 4.82, 3.04
	Linux no-preload-713838 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b0ce37d641b86571cb5fe3e7bea6acb3968e201d71d2f8e58691e954136608d] <==
	I1210 06:24:28.656036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:28.658541       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1210 06:24:28.658768       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:28.658795       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:28.658824       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:28.953435       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:28.953523       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:28.953541       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:28.954620       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:29.354522       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:29.354562       1 metrics.go:72] Registering metrics
	I1210 06:24:29.354639       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:38.953909       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:24:38.953968       1 main.go:301] handling current node
	I1210 06:24:48.953350       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:24:48.953403       1 main.go:301] handling current node
	I1210 06:24:58.953909       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:24:58.953949       1 main.go:301] handling current node
	I1210 06:25:08.960586       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:25:08.960620       1 main.go:301] handling current node
	I1210 06:25:18.962569       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1210 06:25:18.962614       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a81e637bcbb822a79d3c9d17ceb44a480481cefb6fd1bd5c4f5c51620d65578] <==
	I1210 06:24:27.721163       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:24:27.721909       1 aggregator.go:187] initial CRD sync complete...
	I1210 06:24:27.722917       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:24:27.722968       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:24:27.722993       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:24:27.721100       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:27.721090       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:24:27.725492       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:27.733188       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:24:27.737645       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:24:27.754603       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:27.758331       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:24:28.132544       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:24:28.159751       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:24:28.187201       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:24:28.214122       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:28.226609       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:28.281062       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.242.60"}
	I1210 06:24:28.302862       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.26.231"}
	I1210 06:24:28.623340       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:24:31.331071       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:24:31.479852       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:31.529421       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [352c8f0e348fa006abc84878109c4605c54ea03f96f88b48143b6b659f4b95cb] <==
	I1210 06:24:30.889838       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889853       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889549       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889989       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890028       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890044       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890161       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890197       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890200       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890350       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890652       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.890689       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.891301       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.891565       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889260       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.889675       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:24:30.892059       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-713838"
	I1210 06:24:30.892127       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1210 06:24:30.893717       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.896744       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:24:30.900447       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.988382       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:30.988407       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:24:30.988414       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:24:30.997525       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [1674e78c0eb170c33491a539a64875481799478be07a401ad67fa61986708cd8] <==
	I1210 06:24:28.484406       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:24:28.568226       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:24:28.668904       1 shared_informer.go:377] "Caches are synced"
	I1210 06:24:28.668951       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1210 06:24:28.669050       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:24:28.741139       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:28.741211       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:24:28.748007       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:24:28.748503       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 06:24:28.748585       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:28.750267       1 config.go:200] "Starting service config controller"
	I1210 06:24:28.750288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:24:28.750397       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:24:28.750409       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:24:28.750455       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:24:28.750461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:24:28.750465       1 config.go:309] "Starting node config controller"
	I1210 06:24:28.750514       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:24:28.750521       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:24:28.850682       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:24:28.850826       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:24:28.852672       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7e3a6ab1e6a60502ba96ff6d0a9a8e22ac37e396772330314f4ad7f55de8b26b] <==
	I1210 06:24:25.821099       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:24:27.680612       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:24:27.680651       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:24:27.680664       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:24:27.680674       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:24:27.733347       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 06:24:27.733383       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:27.736524       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:27.736942       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:24:27.736883       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:24:27.736907       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:24:27.837754       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:24:41 no-preload-713838 kubelet[714]: E1210 06:24:41.133901     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:41 no-preload-713838 kubelet[714]: I1210 06:24:41.133933     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:41 no-preload-713838 kubelet[714]: E1210 06:24:41.134103     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:42 no-preload-713838 kubelet[714]: E1210 06:24:42.511725     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-713838" containerName="kube-apiserver"
	Dec 10 06:24:43 no-preload-713838 kubelet[714]: E1210 06:24:43.138927     714 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-713838" containerName="kube-apiserver"
	Dec 10 06:24:48 no-preload-713838 kubelet[714]: E1210 06:24:48.452767     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:48 no-preload-713838 kubelet[714]: I1210 06:24:48.452813     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:48 no-preload-713838 kubelet[714]: E1210 06:24:48.453052     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: E1210 06:24:50.034661     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: I1210 06:24:50.034716     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: I1210 06:24:50.160380     714 scope.go:122] "RemoveContainer" containerID="34bb97e4191518cc5deb571523e1c1c72203c3e21fec3593849df7314454307c"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: E1210 06:24:50.160809     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: I1210 06:24:50.160845     714 scope.go:122] "RemoveContainer" containerID="8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	Dec 10 06:24:50 no-preload-713838 kubelet[714]: E1210 06:24:50.161071     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:58 no-preload-713838 kubelet[714]: E1210 06:24:58.452563     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:24:58 no-preload-713838 kubelet[714]: I1210 06:24:58.452611     714 scope.go:122] "RemoveContainer" containerID="8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	Dec 10 06:24:58 no-preload-713838 kubelet[714]: E1210 06:24:58.452780     714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-p6wnm_kubernetes-dashboard(655971eb-87e1-4d59-8a87-82ae1750a6a5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" podUID="655971eb-87e1-4d59-8a87-82ae1750a6a5"
	Dec 10 06:24:59 no-preload-713838 kubelet[714]: I1210 06:24:59.187139     714 scope.go:122] "RemoveContainer" containerID="c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945"
	Dec 10 06:25:00 no-preload-713838 kubelet[714]: E1210 06:25:00.090767     714 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hr4gk" containerName="coredns"
	Dec 10 06:25:14 no-preload-713838 kubelet[714]: E1210 06:25:14.035109     714 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-p6wnm" containerName="dashboard-metrics-scraper"
	Dec 10 06:25:14 no-preload-713838 kubelet[714]: I1210 06:25:14.035167     714 scope.go:122] "RemoveContainer" containerID="8cf7653df43bfb019a879555ed0b3523ed1144c91d66cad14bf2f900672b3e97"
	Dec 10 06:25:14 no-preload-713838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:14 no-preload-713838 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:14 no-preload-713838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:25:14 no-preload-713838 systemd[1]: kubelet.service: Consumed 1.779s CPU time.
	
	
	==> kubernetes-dashboard [1323310fa79d97e897f48248a3271e2d891ea27849df186d92211b9ee5b46f18] <==
	2025/12/10 06:24:35 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:35 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:35 Using secret token for csrf signing
	2025/12/10 06:24:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:35 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/10 06:24:35 Generating JWE encryption key
	2025/12/10 06:24:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:36 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:36 Creating in-cluster Sidecar client
	2025/12/10 06:24:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:36 Serving insecurely on HTTP port: 9090
	2025/12/10 06:25:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:35 Starting overwatch
	
	
	==> storage-provisioner [a85f655104f8de9cbb4bfafb1587d4e6d12c001b4413e2dded406cb8f4a9411a] <==
	I1210 06:24:59.252954       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:24:59.262652       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:24:59.262746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:24:59.265238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:02.721459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:06.982280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:10.580692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:13.635082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.657636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.663784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:16.664044       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:25:16.664162       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13195869-7f7c-4acf-98a0-df0b10b14e40", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-713838_7f7be3ad-6468-403a-8fa0-d0509a1443f4 became leader
	I1210 06:25:16.664228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-713838_7f7be3ad-6468-403a-8fa0-d0509a1443f4!
	W1210 06:25:16.667090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.670840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:16.765161       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-713838_7f7be3ad-6468-403a-8fa0-d0509a1443f4!
	W1210 06:25:18.673925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.679417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c9af406a71bb6827180c265a5825986d235be10e8202bfcd28bd70a363cd3945] <==
	I1210 06:24:28.454864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:24:58.457734       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713838 -n no-preload-713838
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713838 -n no-preload-713838: exit status 2 (347.29226ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-713838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.52s)

x
+
TestStartStop/group/embed-certs/serial/Pause (5.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-133470 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-133470 --alsologtostderr -v=1: exit status 80 (1.714047791s)

-- stdout --
	* Pausing node embed-certs-133470 ... 
	
	

-- /stdout --
** stderr ** 
	I1210 06:25:20.815597  341463 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:20.815703  341463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:20.815715  341463 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:20.815719  341463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:20.815938  341463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:20.816235  341463 out.go:368] Setting JSON to false
	I1210 06:25:20.816256  341463 mustload.go:66] Loading cluster: embed-certs-133470
	I1210 06:25:20.816714  341463 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:25:20.817213  341463 cli_runner.go:164] Run: docker container inspect embed-certs-133470 --format={{.State.Status}}
	I1210 06:25:20.837129  341463 host.go:66] Checking if "embed-certs-133470" exists ...
	I1210 06:25:20.837513  341463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:20.897486  341463 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-10 06:25:20.887265227 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:20.898125  341463 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-133470 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:25:20.900207  341463 out.go:179] * Pausing node embed-certs-133470 ... 
	I1210 06:25:20.901622  341463 host.go:66] Checking if "embed-certs-133470" exists ...
	I1210 06:25:20.901878  341463 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:20.901918  341463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-133470
	I1210 06:25:20.920979  341463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/embed-certs-133470/id_rsa Username:docker}
	I1210 06:25:21.016769  341463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:21.049347  341463 pause.go:52] kubelet running: true
	I1210 06:25:21.049427  341463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:21.227827  341463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:21.227924  341463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:21.302211  341463 cri.go:89] found id: "2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7"
	I1210 06:25:21.302236  341463 cri.go:89] found id: "13e976b147ae71ac7ced68e8f9b72b5ec6754a28d1b1cf43d63103eda063a601"
	I1210 06:25:21.302243  341463 cri.go:89] found id: "7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e"
	I1210 06:25:21.302249  341463 cri.go:89] found id: "0c31d45ef74fb05281a156cb4b2c1bfd08a7578166fa2e49f92b067ceba00ed4"
	I1210 06:25:21.302254  341463 cri.go:89] found id: "e24ec95c65e2b7512bd846c71358432fa87dca45b70403bb1e0c9397e2e56dc8"
	I1210 06:25:21.302260  341463 cri.go:89] found id: "d6469f0541702fe81ba71666ade3d8b49b710a9889eeda64a30872196f87d79b"
	I1210 06:25:21.302264  341463 cri.go:89] found id: "7648ffbcd0289f174298c84e0db8f9defb9c9e8f94bb12bce5d42d6204170ddf"
	I1210 06:25:21.302268  341463 cri.go:89] found id: "1d978d02f9539453ea47a09b2b2ab8fb9b27a2bf69492ed41a51cb35be1aa40c"
	I1210 06:25:21.302273  341463 cri.go:89] found id: "41ac6d073418d2eb1af6e3c34750732dd3f22567edf771586f1f62db7cdeebd7"
	I1210 06:25:21.302282  341463 cri.go:89] found id: "9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	I1210 06:25:21.302286  341463 cri.go:89] found id: "b57c64e71446e7e9d2ba0cd5b5c15928f33d9c4625a9b1fad5eeaa44af09c95e"
	I1210 06:25:21.302290  341463 cri.go:89] found id: ""
	I1210 06:25:21.302350  341463 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:21.315018  341463 retry.go:31] will retry after 200.133434ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:21Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:21.515464  341463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:21.530253  341463 pause.go:52] kubelet running: false
	I1210 06:25:21.530321  341463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:21.687301  341463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:21.687398  341463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:21.768408  341463 cri.go:89] found id: "2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7"
	I1210 06:25:21.768429  341463 cri.go:89] found id: "13e976b147ae71ac7ced68e8f9b72b5ec6754a28d1b1cf43d63103eda063a601"
	I1210 06:25:21.768433  341463 cri.go:89] found id: "7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e"
	I1210 06:25:21.768437  341463 cri.go:89] found id: "0c31d45ef74fb05281a156cb4b2c1bfd08a7578166fa2e49f92b067ceba00ed4"
	I1210 06:25:21.768440  341463 cri.go:89] found id: "e24ec95c65e2b7512bd846c71358432fa87dca45b70403bb1e0c9397e2e56dc8"
	I1210 06:25:21.768443  341463 cri.go:89] found id: "d6469f0541702fe81ba71666ade3d8b49b710a9889eeda64a30872196f87d79b"
	I1210 06:25:21.768446  341463 cri.go:89] found id: "7648ffbcd0289f174298c84e0db8f9defb9c9e8f94bb12bce5d42d6204170ddf"
	I1210 06:25:21.768449  341463 cri.go:89] found id: "1d978d02f9539453ea47a09b2b2ab8fb9b27a2bf69492ed41a51cb35be1aa40c"
	I1210 06:25:21.768452  341463 cri.go:89] found id: "41ac6d073418d2eb1af6e3c34750732dd3f22567edf771586f1f62db7cdeebd7"
	I1210 06:25:21.768463  341463 cri.go:89] found id: "9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	I1210 06:25:21.768485  341463 cri.go:89] found id: "b57c64e71446e7e9d2ba0cd5b5c15928f33d9c4625a9b1fad5eeaa44af09c95e"
	I1210 06:25:21.768490  341463 cri.go:89] found id: ""
	I1210 06:25:21.768531  341463 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:21.784462  341463 retry.go:31] will retry after 397.275258ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:21Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:22.182162  341463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:22.199632  341463 pause.go:52] kubelet running: false
	I1210 06:25:22.199714  341463 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:22.361297  341463 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:22.361388  341463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:22.432172  341463 cri.go:89] found id: "2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7"
	I1210 06:25:22.432194  341463 cri.go:89] found id: "13e976b147ae71ac7ced68e8f9b72b5ec6754a28d1b1cf43d63103eda063a601"
	I1210 06:25:22.432199  341463 cri.go:89] found id: "7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e"
	I1210 06:25:22.432202  341463 cri.go:89] found id: "0c31d45ef74fb05281a156cb4b2c1bfd08a7578166fa2e49f92b067ceba00ed4"
	I1210 06:25:22.432206  341463 cri.go:89] found id: "e24ec95c65e2b7512bd846c71358432fa87dca45b70403bb1e0c9397e2e56dc8"
	I1210 06:25:22.432209  341463 cri.go:89] found id: "d6469f0541702fe81ba71666ade3d8b49b710a9889eeda64a30872196f87d79b"
	I1210 06:25:22.432213  341463 cri.go:89] found id: "7648ffbcd0289f174298c84e0db8f9defb9c9e8f94bb12bce5d42d6204170ddf"
	I1210 06:25:22.432215  341463 cri.go:89] found id: "1d978d02f9539453ea47a09b2b2ab8fb9b27a2bf69492ed41a51cb35be1aa40c"
	I1210 06:25:22.432218  341463 cri.go:89] found id: "41ac6d073418d2eb1af6e3c34750732dd3f22567edf771586f1f62db7cdeebd7"
	I1210 06:25:22.432223  341463 cri.go:89] found id: "9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	I1210 06:25:22.432226  341463 cri.go:89] found id: "b57c64e71446e7e9d2ba0cd5b5c15928f33d9c4625a9b1fad5eeaa44af09c95e"
	I1210 06:25:22.432229  341463 cri.go:89] found id: ""
	I1210 06:25:22.432266  341463 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:22.453929  341463 out.go:203] 
	W1210 06:25:22.456048  341463 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:25:22.456074  341463 out.go:285] * 
	* 
	W1210 06:25:22.461272  341463 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:25:22.463695  341463 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-133470 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-133470
helpers_test.go:244: (dbg) docker inspect embed-certs-133470:

-- stdout --
	[
	    {
	        "Id": "3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76",
	        "Created": "2025-12-10T06:23:10.449450924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 328138,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:24:19.415792942Z",
	            "FinishedAt": "2025-12-10T06:24:18.002845647Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/hosts",
	        "LogPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76-json.log",
	        "Name": "/embed-certs-133470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-133470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-133470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76",
	                "LowerDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-133470",
	                "Source": "/var/lib/docker/volumes/embed-certs-133470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-133470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-133470",
	                "name.minikube.sigs.k8s.io": "embed-certs-133470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7a87b967b6b27e017e523853e7262f80307b066e123f5fdc5afeb839ae07e80e",
	            "SandboxKey": "/var/run/docker/netns/7a87b967b6b2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-133470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c997c342a102de8ded4e3e9d1b30c87213863ef3e6af404e57b008495685711b",
	                    "EndpointID": "d238e09f509aa4bf9ecf223eb8a6beb98cacdf78bbf75077c543b3b6e868ca42",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0e:6d:1b:43:62:4b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-133470",
	                        "3a1f3f3228b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470: exit status 2 (365.293289ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-133470 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-133470 logs -n 25: (1.225400325s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p old-k8s-version-424086 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:24:59.327087  336887 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:59.327365  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327375  336887 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:59.327379  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327669  336887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:59.328143  336887 out.go:368] Setting JSON to false
	I1210 06:24:59.329429  336887 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4050,"bootTime":1765343849,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:59.329519  336887 start.go:143] virtualization: kvm guest
	I1210 06:24:59.331611  336887 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:59.333096  336887 notify.go:221] Checking for updates...
	I1210 06:24:59.333116  336887 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:59.334447  336887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:59.336068  336887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:59.337494  336887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:59.338960  336887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:59.340340  336887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:59.342187  336887 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342330  336887 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342492  336887 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:24:59.342623  336887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:59.369242  336887 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:59.369328  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.432140  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.420604919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.432256  336887 docker.go:319] overlay module found
	I1210 06:24:59.435201  336887 out.go:179] * Using the docker driver based on user configuration
	W1210 06:24:55.887075  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:58.386507  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:24:59.436402  336887 start.go:309] selected driver: docker
	I1210 06:24:59.436415  336887 start.go:927] validating driver "docker" against <nil>
	I1210 06:24:59.436427  336887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:59.436998  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.496347  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.486011226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.496517  336887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:24:59.496554  336887 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:24:59.496758  336887 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:24:59.499173  336887 out.go:179] * Using Docker driver with root privileges
	I1210 06:24:59.500516  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:24:59.500598  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:59.500612  336887 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:24:59.500684  336887 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:59.502093  336887 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:24:59.503450  336887 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:59.504798  336887 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:59.506022  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:24:59.506091  336887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:59.506102  336887 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:59.506114  336887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:59.506191  336887 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:59.506203  336887 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:24:59.506300  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:24:59.506323  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json: {Name:mkdf58f074b298e370024a6ce1eb0198fc1a1932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:59.529599  336887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:59.529619  336887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:59.529645  336887 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:24:59.529672  336887 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:59.529766  336887 start.go:364] duration metric: took 78.432µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:24:59.529787  336887 start.go:93] Provisioning new machine with config: &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:59.529851  336887 start.go:125] createHost starting for "" (driver="docker")
	W1210 06:24:58.946860  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:25:00.446892  326955 pod_ready.go:94] pod "coredns-7d764666f9-hr4gk" is "Ready"
	I1210 06:25:00.446917  326955 pod_ready.go:86] duration metric: took 31.006503405s for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.449783  326955 pod_ready.go:83] waiting for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.454644  326955 pod_ready.go:94] pod "etcd-no-preload-713838" is "Ready"
	I1210 06:25:00.454673  326955 pod_ready.go:86] duration metric: took 4.863318ms for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.457203  326955 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.462197  326955 pod_ready.go:94] pod "kube-apiserver-no-preload-713838" is "Ready"
	I1210 06:25:00.462227  326955 pod_ready.go:86] duration metric: took 4.996726ms for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.464859  326955 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.643687  326955 pod_ready.go:94] pod "kube-controller-manager-no-preload-713838" is "Ready"
	I1210 06:25:00.643711  326955 pod_ready.go:86] duration metric: took 178.834657ms for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.844018  326955 pod_ready.go:83] waiting for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.244075  326955 pod_ready.go:94] pod "kube-proxy-c62hk" is "Ready"
	I1210 06:25:01.244105  326955 pod_ready.go:86] duration metric: took 400.060427ms for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.445041  326955 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843827  326955 pod_ready.go:94] pod "kube-scheduler-no-preload-713838" is "Ready"
	I1210 06:25:01.843854  326955 pod_ready.go:86] duration metric: took 398.788804ms for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843867  326955 pod_ready.go:40] duration metric: took 32.407570406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:01.891782  326955 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:01.897299  326955 out.go:179] * Done! kubectl is now configured to use "no-preload-713838" cluster and "default" namespace by default
	W1210 06:25:00.080872  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:02.579615  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:59.532875  336887 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:24:59.533186  336887 start.go:159] libmachine.API.Create for "newest-cni-126107" (driver="docker")
	I1210 06:24:59.533225  336887 client.go:173] LocalClient.Create starting
	I1210 06:24:59.533327  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 06:24:59.533388  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533416  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533500  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 06:24:59.533540  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533557  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533982  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:24:59.552885  336887 cli_runner.go:211] docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:24:59.552988  336887 network_create.go:284] running [docker network inspect newest-cni-126107] to gather additional debugging logs...
	I1210 06:24:59.553008  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107
	W1210 06:24:59.572451  336887 cli_runner.go:211] docker network inspect newest-cni-126107 returned with exit code 1
	I1210 06:24:59.572534  336887 network_create.go:287] error running [docker network inspect newest-cni-126107]: docker network inspect newest-cni-126107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-126107 not found
	I1210 06:24:59.572551  336887 network_create.go:289] output of [docker network inspect newest-cni-126107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-126107 not found
	
	** /stderr **
	I1210 06:24:59.572710  336887 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:59.592775  336887 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
	I1210 06:24:59.593342  336887 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2fbfa5ca31a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:30:9e:0a:da:73} reservation:<nil>}
	I1210 06:24:59.594133  336887 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-68b4fc4b224b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:0a:d7:21:69:83} reservation:<nil>}
	I1210 06:24:59.594915  336887 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a24a8ad90ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ea:e5:16:4c:6f} reservation:<nil>}
	I1210 06:24:59.595927  336887 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd18e0}
	I1210 06:24:59.595955  336887 network_create.go:124] attempt to create docker network newest-cni-126107 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:24:59.596007  336887 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-126107 newest-cni-126107
	I1210 06:24:59.648242  336887 network_create.go:108] docker network newest-cni-126107 192.168.85.0/24 created
	I1210 06:24:59.648276  336887 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-126107" container
	I1210 06:24:59.648334  336887 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:24:59.667592  336887 cli_runner.go:164] Run: docker volume create newest-cni-126107 --label name.minikube.sigs.k8s.io=newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:24:59.686982  336887 oci.go:103] Successfully created a docker volume newest-cni-126107
	I1210 06:24:59.687084  336887 cli_runner.go:164] Run: docker run --rm --name newest-cni-126107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --entrypoint /usr/bin/test -v newest-cni-126107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:25:00.115171  336887 oci.go:107] Successfully prepared a docker volume newest-cni-126107
	I1210 06:25:00.115245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:00.115259  336887 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:25:00.115360  336887 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:25:04.112675  336887 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.997248616s)
	I1210 06:25:04.112712  336887 kic.go:203] duration metric: took 3.997449096s to extract preloaded images to volume ...
	W1210 06:25:04.112837  336887 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:25:04.112877  336887 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:25:04.112928  336887 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:25:04.172016  336887 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-126107 --name newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-126107 --network newest-cni-126107 --ip 192.168.85.2 --volume newest-cni-126107:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	W1210 06:25:00.387118  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:02.917573  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:04.579873  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:06.580394  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:25:07.580576  327833 pod_ready.go:94] pod "coredns-66bc5c9577-gw75x" is "Ready"
	I1210 06:25:07.580605  327833 pod_ready.go:86] duration metric: took 37.506619554s for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.583509  327833 pod_ready.go:83] waiting for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.587865  327833 pod_ready.go:94] pod "etcd-embed-certs-133470" is "Ready"
	I1210 06:25:07.587890  327833 pod_ready.go:86] duration metric: took 4.359471ms for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.590170  327833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.594746  327833 pod_ready.go:94] pod "kube-apiserver-embed-certs-133470" is "Ready"
	I1210 06:25:07.594774  327833 pod_ready.go:86] duration metric: took 4.57905ms for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.596975  327833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.778320  327833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-133470" is "Ready"
	I1210 06:25:07.778347  327833 pod_ready.go:86] duration metric: took 181.346408ms for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.979006  327833 pod_ready.go:83] waiting for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.378607  327833 pod_ready.go:94] pod "kube-proxy-fkdk9" is "Ready"
	I1210 06:25:08.378631  327833 pod_ready.go:86] duration metric: took 399.601345ms for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.578014  327833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978761  327833 pod_ready.go:94] pod "kube-scheduler-embed-certs-133470" is "Ready"
	I1210 06:25:08.978787  327833 pod_ready.go:86] duration metric: took 400.749384ms for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978798  327833 pod_ready.go:40] duration metric: took 38.909473428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:09.028286  327833 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:09.030218  327833 out.go:179] * Done! kubectl is now configured to use "embed-certs-133470" cluster and "default" namespace by default
	I1210 06:25:04.481386  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Running}}
	I1210 06:25:04.502244  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.522735  336887 cli_runner.go:164] Run: docker exec newest-cni-126107 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:25:04.571010  336887 oci.go:144] the created container "newest-cni-126107" has a running status.
	I1210 06:25:04.571044  336887 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa...
	I1210 06:25:04.663409  336887 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:25:04.690550  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.713575  336887 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:25:04.713604  336887 kic_runner.go:114] Args: [docker exec --privileged newest-cni-126107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:25:04.767064  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.791773  336887 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:04.791873  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:04.819325  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:04.819813  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:04.819834  336887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:04.820667  336887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:25:07.958166  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:07.958195  336887 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:07.958260  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:07.980501  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:07.980710  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:07.980728  336887 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:08.127040  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:08.127128  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.147687  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.147963  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.147982  336887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:08.283513  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:08.283545  336887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:08.283569  336887 ubuntu.go:190] setting up certificates
	I1210 06:25:08.283582  336887 provision.go:84] configureAuth start
	I1210 06:25:08.283641  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:08.304777  336887 provision.go:143] copyHostCerts
	I1210 06:25:08.304859  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:08.304870  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:08.304943  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:08.305028  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:08.305036  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:08.305061  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:08.305130  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:08.305138  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:08.305161  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:08.305231  336887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:08.358046  336887 provision.go:177] copyRemoteCerts
	I1210 06:25:08.358115  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:08.358153  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.378428  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.475365  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:08.497101  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:08.517033  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:08.536354  336887 provision.go:87] duration metric: took 252.752199ms to configureAuth
	I1210 06:25:08.536379  336887 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:08.536554  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:08.536656  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.556388  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.556749  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.556781  336887 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:08.835275  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:08.835301  336887 machine.go:97] duration metric: took 4.043503325s to provisionDockerMachine
	I1210 06:25:08.835313  336887 client.go:176] duration metric: took 9.302078213s to LocalClient.Create
	I1210 06:25:08.835335  336887 start.go:167] duration metric: took 9.302149263s to libmachine.API.Create "newest-cni-126107"
	I1210 06:25:08.835345  336887 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:08.835361  336887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:08.835432  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:08.835497  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.855854  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.956961  336887 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:08.961167  336887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:08.961201  336887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:08.961213  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:08.961271  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:08.961344  336887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:08.961433  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:08.970695  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:08.995442  336887 start.go:296] duration metric: took 160.082878ms for postStartSetup
	I1210 06:25:08.995880  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.016559  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:09.016908  336887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:09.016964  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.038838  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.139907  336887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:09.145902  336887 start.go:128] duration metric: took 9.616033039s to createHost
	I1210 06:25:09.145930  336887 start.go:83] releasing machines lock for "newest-cni-126107", held for 9.616152275s
	I1210 06:25:09.146007  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.166587  336887 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:09.166650  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.166669  336887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:09.166759  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.189521  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.189525  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.284007  336887 ssh_runner.go:195] Run: systemctl --version
	W1210 06:25:05.386403  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:07.387202  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:09.387389  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:09.351948  336887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:09.392017  336887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:09.397100  336887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:09.397159  336887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:09.426437  336887 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:25:09.426486  336887 start.go:496] detecting cgroup driver to use...
	I1210 06:25:09.426524  336887 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:09.426570  336887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:09.444100  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:09.457503  336887 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:09.457569  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:09.475303  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:09.495265  336887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:09.584209  336887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:09.673201  336887 docker.go:234] disabling docker service ...
	I1210 06:25:09.673262  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:09.692964  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:09.706562  336887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:09.794361  336887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:09.886009  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:09.899964  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:09.915638  336887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:09.915690  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.927534  336887 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:09.927591  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.937774  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.947722  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.957780  336887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:09.967038  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.977926  336887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.993658  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:10.003638  336887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:10.012100  336887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:10.021305  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.110274  336887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:25:10.246619  336887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:10.246690  336887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:10.251096  336887 start.go:564] Will wait 60s for crictl version
	I1210 06:25:10.251165  336887 ssh_runner.go:195] Run: which crictl
	I1210 06:25:10.255306  336887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:10.283066  336887 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:10.283157  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.313027  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.346493  336887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:10.348155  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:10.367398  336887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:10.371843  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.385684  336887 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:10.387117  336887 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:10.387245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:10.387300  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.421783  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.421805  336887 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:10.421852  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.448367  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.448389  336887 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:10.448395  336887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:10.448494  336887 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:25:10.448573  336887 ssh_runner.go:195] Run: crio config
	I1210 06:25:10.498037  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:10.498063  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:10.498081  336887 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:10.498120  336887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:10.498246  336887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:25:10.498306  336887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:10.507229  336887 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:10.507302  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:10.516385  336887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:10.530854  336887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:10.548260  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1210 06:25:10.563281  336887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:10.567436  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.578747  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.660880  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:10.688248  336887 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:10.688268  336887 certs.go:195] generating shared ca certs ...
	I1210 06:25:10.688286  336887 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.688431  336887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:10.688526  336887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:10.688544  336887 certs.go:257] generating profile certs ...
	I1210 06:25:10.688612  336887 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:10.688636  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt with IP's: []
	I1210 06:25:10.813463  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt ...
	I1210 06:25:10.813530  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt: {Name:mk7009f3bf80c2397e5ae6cdebdca2735a7f7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813756  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key ...
	I1210 06:25:10.813772  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key: {Name:mk6d255207a819b82a749c48b0009054007ff91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813864  336887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:10.813882  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:25:11.022417  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf ...
	I1210 06:25:11.022443  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf: {Name:mk09a2e21f902ac4eed926780c1f90cb426b5a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022619  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf ...
	I1210 06:25:11.022632  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf: {Name:mkc73ed6c35fb6a21244daf518e5b2d0a7440a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022704  336887 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt
	I1210 06:25:11.022778  336887 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key
	I1210 06:25:11.022831  336887 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:11.022848  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt with IP's: []
	I1210 06:25:11.088507  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt ...
	I1210 06:25:11.088534  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt: {Name:mkdd3c9abbfeb78fdbbafdaf53f324a4a2e625ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088686  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key ...
	I1210 06:25:11.088699  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key: {Name:mkd22ad5ae4429236c87cce8641338a9393df47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
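The certs.go/crypto.go lines above show the profile certificates being minted: a client cert for "minikube-user", an apiserver serving cert covering the service VIPs and the node IP, and an aggregator ("proxy-client") cert, all signed by the shared minikubeCA. Below is a minimal, hypothetical Go sketch of the same basic operation, issuing a client certificate from an existing CA with crypto/x509; the file names, subject, and PKCS#1 RSA key format are assumptions for illustration, not minikube's actual code or layout.

```go
// Hypothetical sketch only: issue a client certificate signed by an existing
// CA, the same basic operation the certs.go/crypto.go lines above perform.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair (PEM) from placeholder paths.
	caCertPEM, err := os.ReadFile("ca.crt")
	must(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	must(err)
	certBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if certBlock == nil || keyBlock == nil {
		panic("ca.crt / ca.key are not valid PEM")
	}
	caCert, err := x509.ParseCertificate(certBlock.Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA key
	must(err)

	// Fresh key pair for the client identity.
	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}

	// Signing with the CA cert as parent and the CA key as signer is what
	// makes the resulting certificate verifiable against ca.crt.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```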
	I1210 06:25:11.088869  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:11.088906  336887 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:11.088917  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:11.088939  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:11.088963  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:11.088988  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:11.089034  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:11.089621  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:11.108552  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:11.127416  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:11.146079  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:11.164732  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:11.183864  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:11.202457  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:11.221380  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:11.241165  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:11.262201  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:11.282304  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:11.302104  336887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:11.316208  336887 ssh_runner.go:195] Run: openssl version
	I1210 06:25:11.323011  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.331150  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:11.339353  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343453  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343539  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.378191  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:25:11.387532  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:25:11.395709  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.403915  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:11.413083  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417256  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417315  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.452744  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.460975  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.468848  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.477072  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:11.485572  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490083  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490144  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.529873  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:11.538675  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
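The `openssl x509 -hash -noout` / `ln -fs` pairs above implement the OpenSSL trust-directory convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named `<subject-hash>.0`, where the hash is the value openssl prints for the certificate's subject. A small hypothetical Go helper doing the same thing, shelling out to openssl exactly as the logged commands do, could look like this (the paths in main are examples taken from the log):

```go
// Hypothetical helper mirroring the hash-symlink pattern shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrusted creates /etc/ssl/certs-style "<subject-hash>.0" symlink for certPath.
func installTrusted(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // overwrite an existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The `.0` suffix is just an index; if two CAs happen to share a subject hash, the second link gets `.1`, and so on.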
	I1210 06:25:11.547942  336887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:11.552437  336887 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:25:11.552529  336887 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:11.552617  336887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:11.552673  336887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:11.582819  336887 cri.go:89] found id: ""
	I1210 06:25:11.582893  336887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:11.591576  336887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:25:11.600085  336887 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:25:11.600143  336887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:25:11.608700  336887 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:25:11.608723  336887 kubeadm.go:158] found existing configuration files:
	
	I1210 06:25:11.608773  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:25:11.617207  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:25:11.617265  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:25:11.625691  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:25:11.634058  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:25:11.634138  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:25:11.642174  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.650696  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:25:11.650751  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.658854  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:25:11.667261  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:25:11.667309  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:25:11.675445  336887 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:25:11.717793  336887 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:25:11.717857  336887 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:25:11.787773  336887 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:25:11.787862  336887 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:25:11.787918  336887 kubeadm.go:319] OS: Linux
	I1210 06:25:11.788013  336887 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:25:11.788088  336887 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:25:11.788209  336887 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:25:11.788287  336887 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:25:11.788329  336887 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:25:11.788400  336887 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:25:11.788501  336887 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:25:11.788573  336887 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:25:11.851680  336887 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:25:11.851818  336887 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:25:11.851989  336887 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:25:11.859860  336887 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:25:11.863122  336887 out.go:252]   - Generating certificates and keys ...
	I1210 06:25:11.863226  336887 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:25:11.863328  336887 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:25:11.994891  336887 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:25:12.216319  336887 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:25:12.263074  336887 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:25:12.317348  336887 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:25:12.348525  336887 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:25:12.348673  336887 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.453542  336887 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:25:12.453734  336887 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.554979  336887 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:25:12.639691  336887 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:25:12.675769  336887 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:25:12.675887  336887 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:25:12.733954  336887 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:25:12.762974  336887 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:25:12.895579  336887 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:25:12.968568  336887 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:25:13.242877  336887 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:25:13.243493  336887 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:25:13.247727  336887 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:25:13.249454  336887 out.go:252]   - Booting up control plane ...
	I1210 06:25:13.249584  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:25:13.249689  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:25:13.249772  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:25:13.266130  336887 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:25:13.266243  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:25:13.273740  336887 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:25:13.274070  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:25:13.274119  336887 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:25:13.387904  336887 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:25:13.388113  336887 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:25:13.888860  336887 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.995328ms
	I1210 06:25:13.892049  336887 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:25:13.892166  336887 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 06:25:13.892313  336887 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:25:13.892419  336887 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 06:25:11.887626  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:14.389916  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:14.896145  336887 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004021858s
	I1210 06:25:16.123662  336887 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.231275136s
	I1210 06:25:17.894620  336887 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00240365s
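kubeadm's [control-plane-check] phase above simply polls the three health endpoints until each answers 200 or the 4m0s budget runs out. A rough, hypothetical Go version of that polling loop is sketched below; unlike kubeadm, it skips TLS verification rather than presenting client certificates, so it only illustrates the loop, not the real authentication.

```go
// Hypothetical illustration of the [control-plane-check] polling above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or timeout expires.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	endpoints := []string{
		"https://192.168.85.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, u := range endpoints {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}
```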
	I1210 06:25:17.919519  336887 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:25:17.933110  336887 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:25:17.946133  336887 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:25:17.946406  336887 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-126107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:25:17.956662  336887 kubeadm.go:319] [bootstrap-token] Using token: x794l4.dwxrqyazh7co8i2b
	I1210 06:25:17.958956  336887 out.go:252]   - Configuring RBAC rules ...
	I1210 06:25:17.959110  336887 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:25:17.962931  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:25:17.970206  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:25:17.974857  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:25:17.978201  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:25:17.981820  336887 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:25:18.305622  336887 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:25:18.724389  336887 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:25:19.303999  336887 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:25:19.305073  336887 kubeadm.go:319] 
	I1210 06:25:19.305166  336887 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:25:19.305178  336887 kubeadm.go:319] 
	I1210 06:25:19.305276  336887 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:25:19.305284  336887 kubeadm.go:319] 
	I1210 06:25:19.305325  336887 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:25:19.305407  336887 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:25:19.305518  336887 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:25:19.305539  336887 kubeadm.go:319] 
	I1210 06:25:19.305612  336887 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:25:19.305622  336887 kubeadm.go:319] 
	I1210 06:25:19.305692  336887 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:25:19.305701  336887 kubeadm.go:319] 
	I1210 06:25:19.305779  336887 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:25:19.305879  336887 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:25:19.305980  336887 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:25:19.305989  336887 kubeadm.go:319] 
	I1210 06:25:19.306147  336887 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:25:19.306259  336887 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:25:19.306268  336887 kubeadm.go:319] 
	I1210 06:25:19.306392  336887 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x794l4.dwxrqyazh7co8i2b \
	I1210 06:25:19.306553  336887 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 06:25:19.306586  336887 kubeadm.go:319] 	--control-plane 
	I1210 06:25:19.306595  336887 kubeadm.go:319] 
	I1210 06:25:19.306723  336887 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:25:19.306738  336887 kubeadm.go:319] 
	I1210 06:25:19.306834  336887 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x794l4.dwxrqyazh7co8i2b \
	I1210 06:25:19.306968  336887 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 06:25:19.309760  336887 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:25:19.309893  336887 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:25:19.309921  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:19.309935  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:19.312593  336887 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:25:19.314078  336887 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:25:19.319527  336887 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 06:25:19.319547  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
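Here cni.go has picked kindnet for the docker driver + crio runtime combination and staged the rendered manifest at /var/tmp/minikube/cni.yaml, to be applied with the version-matched kubectl from /var/lib/minikube/binaries/v1.35.0-beta.0. A hypothetical sketch of that apply step is below; the binary and manifest paths come from the log, while the kubeconfig path and the use of sudo are assumptions for illustration.

```go
// Hypothetical sketch of applying the staged CNI manifest with the
// version-matched kubectl; not minikube's actual implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	cmd := exec.Command("sudo", kubectl,
		"apply", "-f", "/var/tmp/minikube/cni.yaml",
		"--kubeconfig", "/var/lib/minikube/kubeconfig") // kubeconfig path assumed
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
```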
	W1210 06:25:16.888133  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:19.387854  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:20.387613  331193 pod_ready.go:94] pod "coredns-66bc5c9577-znsz6" is "Ready"
	I1210 06:25:20.387649  331193 pod_ready.go:86] duration metric: took 37.506338739s for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.390589  331193 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.394950  331193 pod_ready.go:94] pod "etcd-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.394970  331193 pod_ready.go:86] duration metric: took 4.358753ms for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.397078  331193 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.401552  331193 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.401582  331193 pod_ready.go:86] duration metric: took 4.480286ms for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.403436  331193 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.586026  331193 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.586066  331193 pod_ready.go:86] duration metric: took 182.609502ms for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.785406  331193 pod_ready.go:83] waiting for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.185282  331193 pod_ready.go:94] pod "kube-proxy-mkpzc" is "Ready"
	I1210 06:25:21.185312  331193 pod_ready.go:86] duration metric: took 399.878814ms for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.385632  331193 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.785630  331193 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:21.785657  331193 pod_ready.go:86] duration metric: took 399.99741ms for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.785671  331193 pod_ready.go:40] duration metric: took 38.908172562s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:21.838180  331193 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:21.841707  331193 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-643991" cluster and "default" namespace by default
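The pod_ready.go lines above (process 331193, the default-k8s-diff-port-643991 cluster) poll each kube-system pod until its Ready condition is True or the pod disappears. A minimal, hypothetical client-go sketch of that wait follows; the namespace and pod name are taken from the log, and the kubeconfig location and timeout are illustrative.

```go
// Hypothetical client-go version of the pod readiness wait shown above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-znsz6", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
```

From the command line, `kubectl wait --for=condition=Ready pod/coredns-66bc5c9577-znsz6 -n kube-system --timeout=4m` performs the same check.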
	
	
	==> CRI-O <==
	Dec 10 06:24:41 embed-certs-133470 crio[568]: time="2025-12-10T06:24:41.255283618Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 06:24:41 embed-certs-133470 crio[568]: time="2025-12-10T06:24:41.263029369Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:24:41 embed-certs-133470 crio[568]: time="2025-12-10T06:24:41.263070304Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.939949314Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b34fdfe1-ebba-4483-81e1-52488fedd961 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.941010657Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8af157f9-6adc-46e4-ab85-7204c9907afe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.942036291Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper" id=1e92ce76-b6ac-485d-89bf-757ccd4e18b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.942187192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.948731496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.949374863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.991580942Z" level=info msg="Created container 9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper" id=1e92ce76-b6ac-485d-89bf-757ccd4e18b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.992270241Z" level=info msg="Starting container: 9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e" id=236f7fa6-ef3a-45b3-a99e-4c225bf0632c name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.994628542Z" level=info msg="Started container" PID=1736 containerID=9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper id=236f7fa6-ef3a-45b3-a99e-4c225bf0632c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c75bf0b937e2641ff4971c9fb95380d3bebe537000e0812f5b51c8baf29a5210
	Dec 10 06:24:59 embed-certs-133470 crio[568]: time="2025-12-10T06:24:59.093198147Z" level=info msg="Removing container: 40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa" id=413d91d3-9dde-4c52-b439-5d5625cefd3c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:59 embed-certs-133470 crio[568]: time="2025-12-10T06:24:59.105960985Z" level=info msg="Removed container 40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper" id=413d91d3-9dde-4c52-b439-5d5625cefd3c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.102811477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a060d036-d3f3-45b1-a5ce-b26ae70946c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.103938465Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eba4a69e-538b-4153-9411-df3f26090362 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.105116619Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1c8ecbc8-f3e5-4ad7-b841-0caa426fb8b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.105257732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112078138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112272621Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2795afba9d86c3adc3340fada668a8a86e84724b0bc4a08d48623eeff3f4336d/merged/etc/passwd: no such file or directory"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112304174Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2795afba9d86c3adc3340fada668a8a86e84724b0bc4a08d48623eeff3f4336d/merged/etc/group: no such file or directory"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112561278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.144182707Z" level=info msg="Created container 2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7: kube-system/storage-provisioner/storage-provisioner" id=1c8ecbc8-f3e5-4ad7-b841-0caa426fb8b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.144918792Z" level=info msg="Starting container: 2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7" id=8e5c91a3-a5d0-4fdd-ae56-fc24a13e4d2f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.146827671Z" level=info msg="Started container" PID=1750 containerID=2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7 description=kube-system/storage-provisioner/storage-provisioner id=8e5c91a3-a5d0-4fdd-ae56-fc24a13e4d2f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbb6a980af03349cb49876dc876a07aea3208a756c36d3d96198ec15a2ae1b89
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2f4a3c5b106dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   fbb6a980af033       storage-provisioner                          kube-system
	9da69852cec5c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   c75bf0b937e26       dashboard-metrics-scraper-6ffb444bf9-dbsf6   kubernetes-dashboard
	b57c64e71446e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   363efcf96c231       kubernetes-dashboard-855c9754f9-tvh5q        kubernetes-dashboard
	9b2e134d00ffb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   50eb908c53546       busybox                                      default
	13e976b147ae7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   074bfbc38ab76       coredns-66bc5c9577-gw75x                     kube-system
	7ed6660ccf81b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   fbb6a980af033       storage-provisioner                          kube-system
	0c31d45ef74fb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   7cda15afad876       kindnet-zhm6w                                kube-system
	e24ec95c65e2b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           52 seconds ago      Running             kube-proxy                  0                   e013c6bbad96b       kube-proxy-fkdk9                             kube-system
	d6469f0541702       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   30268545a32c1       etcd-embed-certs-133470                      kube-system
	7648ffbcd0289       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   de8aba7a4cb88       kube-apiserver-embed-certs-133470            kube-system
	1d978d02f9539       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   d7190721d2d0e       kube-scheduler-embed-certs-133470            kube-system
	41ac6d073418d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   20cfcf93ea42d       kube-controller-manager-embed-certs-133470   kube-system
	
	
	==> coredns [13e976b147ae71ac7ced68e8f9b72b5ec6754a28d1b1cf43d63103eda063a601] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50803 - 49741 "HINFO IN 6352376531880439996.1581486988354613661. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062236852s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-133470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-133470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=embed-certs-133470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-133470
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-133470
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                c679f347-b1a0-4ee9-b8eb-d12f4d1d4e6f
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-gw75x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-133470                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-zhm6w                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-133470             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-133470    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-fkdk9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-133470             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dbsf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tvh5q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node embed-certs-133470 event: Registered Node embed-certs-133470 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-133470 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)    kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)    kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)    kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node embed-certs-133470 event: Registered Node embed-certs-133470 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [d6469f0541702fe81ba71666ade3d8b49b710a9889eeda64a30872196f87d79b] <==
	{"level":"warn","ts":"2025-12-10T06:24:27.936765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.946657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.958372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.967365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.977526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.988644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.997278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.011163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.023144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.036334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.045891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.055777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.065795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.083540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.093498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.111902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.124130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.132775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.142673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.152189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.166570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.174544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.189666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.197720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.259278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:23 up  1:07,  0 user,  load average: 4.73, 4.81, 3.05
	Linux embed-certs-133470 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c31d45ef74fb05281a156cb4b2c1bfd08a7578166fa2e49f92b067ceba00ed4] <==
	I1210 06:24:31.006292       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:31.029517       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 06:24:31.029698       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:31.029722       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:31.029748       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:31.232285       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:31.232655       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:31.232675       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:31.232824       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:31.529300       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:31.529333       1 metrics.go:72] Registering metrics
	I1210 06:24:31.529859       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:41.232582       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:24:41.232679       1 main.go:301] handling current node
	I1210 06:24:51.231997       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:24:51.232053       1 main.go:301] handling current node
	I1210 06:25:01.232556       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:25:01.232602       1 main.go:301] handling current node
	I1210 06:25:11.234608       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:25:11.234659       1 main.go:301] handling current node
	I1210 06:25:21.241562       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:25:21.241600       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7648ffbcd0289f174298c84e0db8f9defb9c9e8f94bb12bce5d42d6204170ddf] <==
	I1210 06:24:28.933694       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 06:24:28.935252       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:24:28.948505       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:24:28.959519       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 06:24:28.959557       1 policy_source.go:240] refreshing policies
	I1210 06:24:28.960213       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:24:28.960405       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:24:28.984566       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:24:28.984824       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:24:28.984841       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:24:28.988020       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:24:28.993787       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:24:29.012058       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:29.073422       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:24:29.323446       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:24:29.361147       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:24:29.382620       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:29.391772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:29.445798       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.228"}
	I1210 06:24:29.462461       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.139.223"}
	I1210 06:24:29.794964       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:24:32.577145       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:32.577203       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:32.628579       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:24:32.726184       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [41ac6d073418d2eb1af6e3c34750732dd3f22567edf771586f1f62db7cdeebd7] <==
	I1210 06:24:32.184687       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:24:32.188118       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:24:32.192431       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:24:32.200728       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:24:32.203897       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:24:32.207341       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:24:32.209623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:24:32.209704       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:24:32.209727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:24:32.210954       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:24:32.213232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:24:32.222683       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:24:32.222692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:24:32.222826       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:24:32.222834       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:24:32.222841       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:24:32.223074       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:24:32.223123       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:24:32.223159       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:24:32.226611       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:24:32.227813       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:24:32.232145       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:32.232175       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:32.238082       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:24:32.246555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e24ec95c65e2b7512bd846c71358432fa87dca45b70403bb1e0c9397e2e56dc8] <==
	I1210 06:24:30.853073       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:24:30.919915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:24:31.020301       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:24:31.020340       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 06:24:31.020482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:24:31.042920       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:31.042996       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:24:31.048372       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:24:31.048751       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:24:31.048789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:31.049918       1 config.go:200] "Starting service config controller"
	I1210 06:24:31.049942       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:24:31.049981       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:24:31.050027       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:24:31.049978       1 config.go:309] "Starting node config controller"
	I1210 06:24:31.050064       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:24:31.050072       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:24:31.050088       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:24:31.050094       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:24:31.150117       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:24:31.150136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:24:31.150382       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1d978d02f9539453ea47a09b2b2ab8fb9b27a2bf69492ed41a51cb35be1aa40c] <==
	I1210 06:24:27.779689       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:24:29.142726       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:24:29.142839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:29.149365       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:24:29.149508       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:24:29.149541       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:24:29.149598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:29.149615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:29.149633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:29.149641       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:29.149774       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:24:29.250358       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:29.250452       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 06:24:29.250595       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:24:32 embed-certs-133470 kubelet[729]: I1210 06:24:32.782279     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb2g\" (UniqueName: \"kubernetes.io/projected/dcca238d-1725-4e73-8fdb-96f099dc9285-kube-api-access-jkb2g\") pod \"dashboard-metrics-scraper-6ffb444bf9-dbsf6\" (UID: \"dcca238d-1725-4e73-8fdb-96f099dc9285\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6"
	Dec 10 06:24:32 embed-certs-133470 kubelet[729]: I1210 06:24:32.782310     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdvq\" (UniqueName: \"kubernetes.io/projected/91ce86a6-7d58-4648-9399-d3b07c7e250c-kube-api-access-wxdvq\") pod \"kubernetes-dashboard-855c9754f9-tvh5q\" (UID: \"91ce86a6-7d58-4648-9399-d3b07c7e250c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tvh5q"
	Dec 10 06:24:37 embed-certs-133470 kubelet[729]: I1210 06:24:37.021792     729 scope.go:117] "RemoveContainer" containerID="6c44a6626a3e41d882765fe33034ca61228403b394f76f66db57a50ffff07681"
	Dec 10 06:24:37 embed-certs-133470 kubelet[729]: I1210 06:24:37.274099     729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:24:38 embed-certs-133470 kubelet[729]: I1210 06:24:38.027936     729 scope.go:117] "RemoveContainer" containerID="6c44a6626a3e41d882765fe33034ca61228403b394f76f66db57a50ffff07681"
	Dec 10 06:24:38 embed-certs-133470 kubelet[729]: I1210 06:24:38.028177     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:38 embed-certs-133470 kubelet[729]: E1210 06:24:38.028389     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:24:39 embed-certs-133470 kubelet[729]: I1210 06:24:39.032892     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:39 embed-certs-133470 kubelet[729]: E1210 06:24:39.033101     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:24:41 embed-certs-133470 kubelet[729]: I1210 06:24:41.061137     729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tvh5q" podStartSLOduration=1.56082388 podStartE2EDuration="9.061112865s" podCreationTimestamp="2025-12-10 06:24:32 +0000 UTC" firstStartedPulling="2025-12-10 06:24:33.027131858 +0000 UTC m=+7.243099990" lastFinishedPulling="2025-12-10 06:24:40.52742086 +0000 UTC m=+14.743388975" observedRunningTime="2025-12-10 06:24:41.060961221 +0000 UTC m=+15.276929356" watchObservedRunningTime="2025-12-10 06:24:41.061112865 +0000 UTC m=+15.277081000"
	Dec 10 06:24:43 embed-certs-133470 kubelet[729]: I1210 06:24:43.662376     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:43 embed-certs-133470 kubelet[729]: E1210 06:24:43.662672     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:24:58 embed-certs-133470 kubelet[729]: I1210 06:24:58.939343     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:59 embed-certs-133470 kubelet[729]: I1210 06:24:59.091786     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:59 embed-certs-133470 kubelet[729]: I1210 06:24:59.092034     729 scope.go:117] "RemoveContainer" containerID="9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	Dec 10 06:24:59 embed-certs-133470 kubelet[729]: E1210 06:24:59.092246     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:25:01 embed-certs-133470 kubelet[729]: I1210 06:25:01.102302     729 scope.go:117] "RemoveContainer" containerID="7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e"
	Dec 10 06:25:03 embed-certs-133470 kubelet[729]: I1210 06:25:03.662858     729 scope.go:117] "RemoveContainer" containerID="9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	Dec 10 06:25:03 embed-certs-133470 kubelet[729]: E1210 06:25:03.663126     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:25:15 embed-certs-133470 kubelet[729]: I1210 06:25:15.939119     729 scope.go:117] "RemoveContainer" containerID="9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	Dec 10 06:25:15 embed-certs-133470 kubelet[729]: E1210 06:25:15.939360     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: kubelet.service: Consumed 1.952s CPU time.
	
	
	==> kubernetes-dashboard [b57c64e71446e7e9d2ba0cd5b5c15928f33d9c4625a9b1fad5eeaa44af09c95e] <==
	2025/12/10 06:24:40 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:40 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:40 Using secret token for csrf signing
	2025/12/10 06:24:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:40 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 06:24:40 Generating JWE encryption key
	2025/12/10 06:24:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:40 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:40 Creating in-cluster Sidecar client
	2025/12/10 06:24:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:40 Serving insecurely on HTTP port: 9090
	2025/12/10 06:25:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:40 Starting overwatch
	
	
	==> storage-provisioner [2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7] <==
	I1210 06:25:01.160373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:25:01.169379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:25:01.169427       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:25:01.172026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:04.627265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:08.888323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:12.487290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:15.540972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.564441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.571834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:18.572026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:25:18.572170       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcbf147d-e027-4c81-b883-f30651ab340b", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-133470_d68b3d64-08c0-48f5-851a-9d7a3377f3d6 became leader
	I1210 06:25:18.572318       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-133470_d68b3d64-08c0-48f5-851a-9d7a3377f3d6!
	W1210 06:25:18.575910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.582650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:18.672943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-133470_d68b3d64-08c0-48f5-851a-9d7a3377f3d6!
	W1210 06:25:20.587275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:20.593237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:22.599304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:22.604678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e] <==
	I1210 06:24:30.823311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:25:00.829370       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-133470 -n embed-certs-133470
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-133470 -n embed-certs-133470: exit status 2 (403.754067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-133470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-133470
helpers_test.go:244: (dbg) docker inspect embed-certs-133470:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76",
	        "Created": "2025-12-10T06:23:10.449450924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 328138,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:24:19.415792942Z",
	            "FinishedAt": "2025-12-10T06:24:18.002845647Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/hosts",
	        "LogPath": "/var/lib/docker/containers/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76/3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76-json.log",
	        "Name": "/embed-certs-133470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-133470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-133470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a1f3f3228b1ec53cd9f63c675c9b5091d68de47bcdbf1b5b82a14243c07aa76",
	                "LowerDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/438187e60f45e0a217a5260189d029ff21902b801168e01bb30941ed2d899de5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-133470",
	                "Source": "/var/lib/docker/volumes/embed-certs-133470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-133470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-133470",
	                "name.minikube.sigs.k8s.io": "embed-certs-133470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7a87b967b6b27e017e523853e7262f80307b066e123f5fdc5afeb839ae07e80e",
	            "SandboxKey": "/var/run/docker/netns/7a87b967b6b2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-133470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c997c342a102de8ded4e3e9d1b30c87213863ef3e6af404e57b008495685711b",
	                    "EndpointID": "d238e09f509aa4bf9ecf223eb8a6beb98cacdf78bbf75077c543b3b6e868ca42",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0e:6d:1b:43:62:4b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-133470",
	                        "3a1f3f3228b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470: exit status 2 (357.336379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-133470 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-133470 logs -n 25: (1.2100681s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:24:59.327087  336887 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:59.327365  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327375  336887 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:59.327379  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327669  336887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:59.328143  336887 out.go:368] Setting JSON to false
	I1210 06:24:59.329429  336887 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4050,"bootTime":1765343849,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:59.329519  336887 start.go:143] virtualization: kvm guest
	I1210 06:24:59.331611  336887 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:59.333096  336887 notify.go:221] Checking for updates...
	I1210 06:24:59.333116  336887 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:59.334447  336887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:59.336068  336887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:59.337494  336887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:59.338960  336887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:59.340340  336887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:59.342187  336887 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342330  336887 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342492  336887 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:24:59.342623  336887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:59.369242  336887 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:59.369328  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.432140  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.420604919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.432256  336887 docker.go:319] overlay module found
	I1210 06:24:59.435201  336887 out.go:179] * Using the docker driver based on user configuration
	W1210 06:24:55.887075  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:58.386507  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:24:59.436402  336887 start.go:309] selected driver: docker
	I1210 06:24:59.436415  336887 start.go:927] validating driver "docker" against <nil>
	I1210 06:24:59.436427  336887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:59.436998  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.496347  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.486011226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.496517  336887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:24:59.496554  336887 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:24:59.496758  336887 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:24:59.499173  336887 out.go:179] * Using Docker driver with root privileges
	I1210 06:24:59.500516  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:24:59.500598  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:59.500612  336887 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:24:59.500684  336887 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:59.502093  336887 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:24:59.503450  336887 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:59.504798  336887 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:59.506022  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:24:59.506091  336887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:59.506102  336887 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:59.506114  336887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:59.506191  336887 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:59.506203  336887 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:24:59.506300  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:24:59.506323  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json: {Name:mkdf58f074b298e370024a6ce1eb0198fc1a1932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:24:59.529599  336887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:59.529619  336887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:59.529645  336887 cache.go:243] Successfully downloaded all kic artifacts
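The cache lines above first check whether the kicbase image is already present in the local Docker daemon and skip the pull when it is ("exists in daemon, skipping load"). A minimal Go sketch of that check follows; the helper name and the `docker image inspect` probe are the obvious equivalent, not necessarily the exact command minikube runs.

```go
package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the given image reference is already present
// in the local Docker daemon. `docker image inspect` exits non-zero when the
// image is absent, so the exit status is the whole check.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	// Base image reference from the log (digest omitted for brevity).
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089"
	if imageInDaemon(ref) {
		fmt.Println("base image already present, skipping pull")
	} else {
		fmt.Println("base image missing, would pull it")
	}
}
```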
	I1210 06:24:59.529672  336887 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:59.529766  336887 start.go:364] duration metric: took 78.432µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:24:59.529787  336887 start.go:93] Provisioning new machine with config: &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:59.529851  336887 start.go:125] createHost starting for "" (driver="docker")
	W1210 06:24:58.946860  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:25:00.446892  326955 pod_ready.go:94] pod "coredns-7d764666f9-hr4gk" is "Ready"
	I1210 06:25:00.446917  326955 pod_ready.go:86] duration metric: took 31.006503405s for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.449783  326955 pod_ready.go:83] waiting for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.454644  326955 pod_ready.go:94] pod "etcd-no-preload-713838" is "Ready"
	I1210 06:25:00.454673  326955 pod_ready.go:86] duration metric: took 4.863318ms for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.457203  326955 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.462197  326955 pod_ready.go:94] pod "kube-apiserver-no-preload-713838" is "Ready"
	I1210 06:25:00.462227  326955 pod_ready.go:86] duration metric: took 4.996726ms for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.464859  326955 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.643687  326955 pod_ready.go:94] pod "kube-controller-manager-no-preload-713838" is "Ready"
	I1210 06:25:00.643711  326955 pod_ready.go:86] duration metric: took 178.834657ms for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.844018  326955 pod_ready.go:83] waiting for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.244075  326955 pod_ready.go:94] pod "kube-proxy-c62hk" is "Ready"
	I1210 06:25:01.244105  326955 pod_ready.go:86] duration metric: took 400.060427ms for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.445041  326955 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843827  326955 pod_ready.go:94] pod "kube-scheduler-no-preload-713838" is "Ready"
	I1210 06:25:01.843854  326955 pod_ready.go:86] duration metric: took 398.788804ms for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843867  326955 pod_ready.go:40] duration metric: took 32.407570406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:01.891782  326955 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:01.897299  326955 out.go:179] * Done! kubectl is now configured to use "no-preload-713838" cluster and "default" namespace by default
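The pod_ready.go lines in this run poll each kube-system pod until its Ready condition reports True or the wait times out; the repeated `pod "coredns-..." is not "Ready"` warnings are that poll looping. A rough client-go sketch of the same readiness check is below, assuming a default kubeconfig location and using one pod name from the log purely as an example.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// condition the pod_ready.go lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the default path, pointing at the cluster under
	// test (e.g. the "no-preload-713838" context configured in the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "kube-system", "coredns-7d764666f9-hr4gk" // pod name taken from the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}
```

The harness's own wait (pod_ready.go) also treats a deleted pod as success ("to be \"Ready\" or be gone"); the sketch only covers the Ready half.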
	W1210 06:25:00.080872  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:02.579615  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:59.532875  336887 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:24:59.533186  336887 start.go:159] libmachine.API.Create for "newest-cni-126107" (driver="docker")
	I1210 06:24:59.533225  336887 client.go:173] LocalClient.Create starting
	I1210 06:24:59.533327  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 06:24:59.533388  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533416  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533500  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 06:24:59.533540  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533557  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533982  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:24:59.552885  336887 cli_runner.go:211] docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:24:59.552988  336887 network_create.go:284] running [docker network inspect newest-cni-126107] to gather additional debugging logs...
	I1210 06:24:59.553008  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107
	W1210 06:24:59.572451  336887 cli_runner.go:211] docker network inspect newest-cni-126107 returned with exit code 1
	I1210 06:24:59.572534  336887 network_create.go:287] error running [docker network inspect newest-cni-126107]: docker network inspect newest-cni-126107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-126107 not found
	I1210 06:24:59.572551  336887 network_create.go:289] output of [docker network inspect newest-cni-126107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-126107 not found
	
	** /stderr **
	I1210 06:24:59.572710  336887 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:59.592775  336887 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
	I1210 06:24:59.593342  336887 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2fbfa5ca31a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:30:9e:0a:da:73} reservation:<nil>}
	I1210 06:24:59.594133  336887 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-68b4fc4b224b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:0a:d7:21:69:83} reservation:<nil>}
	I1210 06:24:59.594915  336887 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a24a8ad90ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ea:e5:16:4c:6f} reservation:<nil>}
	I1210 06:24:59.595927  336887 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd18e0}
	I1210 06:24:59.595955  336887 network_create.go:124] attempt to create docker network newest-cni-126107 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:24:59.596007  336887 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-126107 newest-cni-126107
	I1210 06:24:59.648242  336887 network_create.go:108] docker network newest-cni-126107 192.168.85.0/24 created
	I1210 06:24:59.648276  336887 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-126107" container
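The network.go lines above show how minikube picks an address range for the new cluster network: it skips the private 192.168.x.0/24 blocks already claimed by existing Docker bridges (49, 58, 67, 76), settles on the first free one (192.168.85.0/24), and then reserves .2 for the node container. A minimal Go sketch of that scan follows; the candidate list, starting offset, and step size are assumptions for illustration, not minikube's exact algorithm.

```go
package main

import (
	"fmt"
	"net"
)

// Subnets already claimed by existing Docker bridge networks, as reported by
// the "skipping subnet ... that is taken" lines above.
var taken = []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}

// firstFreeSubnet walks candidate private /24 blocks (49, 58, 67, 76, 85, ...)
// and returns the first one not in the taken list.
func firstFreeSubnet() (*net.IPNet, error) {
	used := map[string]bool{}
	for _, t := range taken {
		used[t] = true
	}
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if used[cidr] {
			continue
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	// With 49, 58, 67 and 76 taken this prints 192.168.85.0/24, matching the
	// "using free private subnet" line; the node container then gets .2.
	fmt.Println("using free private subnet:", subnet)
}
```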
	I1210 06:24:59.648334  336887 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:24:59.667592  336887 cli_runner.go:164] Run: docker volume create newest-cni-126107 --label name.minikube.sigs.k8s.io=newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:24:59.686982  336887 oci.go:103] Successfully created a docker volume newest-cni-126107
	I1210 06:24:59.687084  336887 cli_runner.go:164] Run: docker run --rm --name newest-cni-126107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --entrypoint /usr/bin/test -v newest-cni-126107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:25:00.115171  336887 oci.go:107] Successfully prepared a docker volume newest-cni-126107
	I1210 06:25:00.115245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:00.115259  336887 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:25:00.115360  336887 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:25:04.112675  336887 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.997248616s)
	I1210 06:25:04.112712  336887 kic.go:203] duration metric: took 3.997449096s to extract preloaded images to volume ...
	W1210 06:25:04.112837  336887 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:25:04.112877  336887 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:25:04.112928  336887 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:25:04.172016  336887 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-126107 --name newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-126107 --network newest-cni-126107 --ip 192.168.85.2 --volume newest-cni-126107:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	W1210 06:25:00.387118  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:02.917573  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:04.579873  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:06.580394  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:25:07.580576  327833 pod_ready.go:94] pod "coredns-66bc5c9577-gw75x" is "Ready"
	I1210 06:25:07.580605  327833 pod_ready.go:86] duration metric: took 37.506619554s for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.583509  327833 pod_ready.go:83] waiting for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.587865  327833 pod_ready.go:94] pod "etcd-embed-certs-133470" is "Ready"
	I1210 06:25:07.587890  327833 pod_ready.go:86] duration metric: took 4.359471ms for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.590170  327833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.594746  327833 pod_ready.go:94] pod "kube-apiserver-embed-certs-133470" is "Ready"
	I1210 06:25:07.594774  327833 pod_ready.go:86] duration metric: took 4.57905ms for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.596975  327833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.778320  327833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-133470" is "Ready"
	I1210 06:25:07.778347  327833 pod_ready.go:86] duration metric: took 181.346408ms for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.979006  327833 pod_ready.go:83] waiting for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.378607  327833 pod_ready.go:94] pod "kube-proxy-fkdk9" is "Ready"
	I1210 06:25:08.378631  327833 pod_ready.go:86] duration metric: took 399.601345ms for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.578014  327833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978761  327833 pod_ready.go:94] pod "kube-scheduler-embed-certs-133470" is "Ready"
	I1210 06:25:08.978787  327833 pod_ready.go:86] duration metric: took 400.749384ms for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978798  327833 pod_ready.go:40] duration metric: took 38.909473428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:09.028286  327833 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:09.030218  327833 out.go:179] * Done! kubectl is now configured to use "embed-certs-133470" cluster and "default" namespace by default
	I1210 06:25:04.481386  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Running}}
	I1210 06:25:04.502244  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.522735  336887 cli_runner.go:164] Run: docker exec newest-cni-126107 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:25:04.571010  336887 oci.go:144] the created container "newest-cni-126107" has a running status.
	I1210 06:25:04.571044  336887 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa...
	I1210 06:25:04.663409  336887 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:25:04.690550  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.713575  336887 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:25:04.713604  336887 kic_runner.go:114] Args: [docker exec --privileged newest-cni-126107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:25:04.767064  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.791773  336887 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:04.791873  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:04.819325  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:04.819813  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:04.819834  336887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:04.820667  336887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:25:07.958166  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
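The provisioning steps above reach the node over SSH through the port Docker published for the container's 22/tcp (127.0.0.1:33134 in this run), discovered with the `docker container inspect` template shown in the log. A small Go sketch of that lookup follows; the container name is the profile from this log, and the helper name is made up for the example.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the localhost port Docker published for the container's
// 22/tcp, i.e. the port the "Using SSH client type: native" lines connect to.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("newest-cni-126107")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh is published on 127.0.0.1:" + port)
}
```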
	
	I1210 06:25:07.958195  336887 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:07.958260  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:07.980501  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:07.980710  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:07.980728  336887 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:08.127040  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:08.127128  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.147687  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.147963  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.147982  336887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:08.283513  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:08.283545  336887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:08.283569  336887 ubuntu.go:190] setting up certificates
	I1210 06:25:08.283582  336887 provision.go:84] configureAuth start
	I1210 06:25:08.283641  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:08.304777  336887 provision.go:143] copyHostCerts
	I1210 06:25:08.304859  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:08.304870  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:08.304943  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:08.305028  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:08.305036  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:08.305061  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:08.305130  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:08.305138  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:08.305161  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:08.305231  336887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:08.358046  336887 provision.go:177] copyRemoteCerts
	I1210 06:25:08.358115  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:08.358153  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.378428  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.475365  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:08.497101  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:08.517033  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:08.536354  336887 provision.go:87] duration metric: took 252.752199ms to configureAuth
	I1210 06:25:08.536379  336887 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:08.536554  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:08.536656  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.556388  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.556749  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.556781  336887 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:08.835275  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:08.835301  336887 machine.go:97] duration metric: took 4.043503325s to provisionDockerMachine
	I1210 06:25:08.835313  336887 client.go:176] duration metric: took 9.302078213s to LocalClient.Create
	I1210 06:25:08.835335  336887 start.go:167] duration metric: took 9.302149263s to libmachine.API.Create "newest-cni-126107"
	I1210 06:25:08.835345  336887 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:08.835361  336887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:08.835432  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:08.835497  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.855854  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.956961  336887 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:08.961167  336887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:08.961201  336887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:08.961213  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:08.961271  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:08.961344  336887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:08.961433  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:08.970695  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:08.995442  336887 start.go:296] duration metric: took 160.082878ms for postStartSetup
	I1210 06:25:08.995880  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.016559  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:09.016908  336887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:09.016964  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.038838  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.139907  336887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:09.145902  336887 start.go:128] duration metric: took 9.616033039s to createHost
	I1210 06:25:09.145930  336887 start.go:83] releasing machines lock for "newest-cni-126107", held for 9.616152275s
	I1210 06:25:09.146007  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.166587  336887 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:09.166650  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.166669  336887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:09.166759  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.189521  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.189525  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.284007  336887 ssh_runner.go:195] Run: systemctl --version
	W1210 06:25:05.386403  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:07.387202  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:09.387389  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:09.351948  336887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:09.392017  336887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:09.397100  336887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:09.397159  336887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:09.426437  336887 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 06:25:09.426486  336887 start.go:496] detecting cgroup driver to use...
	I1210 06:25:09.426524  336887 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:09.426570  336887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:09.444100  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:09.457503  336887 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:09.457569  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:09.475303  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:09.495265  336887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:09.584209  336887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:09.673201  336887 docker.go:234] disabling docker service ...
	I1210 06:25:09.673262  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:09.692964  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:09.706562  336887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:09.794361  336887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:09.886009  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:09.899964  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:09.915638  336887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:09.915690  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.927534  336887 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:09.927591  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.937774  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.947722  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.957780  336887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:09.967038  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.977926  336887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.993658  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:10.003638  336887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:10.012100  336887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:10.021305  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.110274  336887 ssh_runner.go:195] Run: sudo systemctl restart crio
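The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to "systemd", and conmon_cgroup is dropped and re-added as "pod" (the remaining commands handle the unprivileged-port sysctl the same way), after which crio is restarted. A small Go sketch applying the equivalent substitutions to a stand-in config follows; the sample file contents are assumed for illustration.

```go
package main

import (
	"fmt"
	"regexp"
)

// A minimal stand-in for /etc/crio/crio.conf.d/02-crio.conf before minikube
// touches it; the real drop-in has more settings.
const before = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`

func main() {
	conf := before

	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	// sed '/conmon_cgroup = .*/d' followed by
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
```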
	I1210 06:25:10.246619  336887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:10.246690  336887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:10.251096  336887 start.go:564] Will wait 60s for crictl version
	I1210 06:25:10.251165  336887 ssh_runner.go:195] Run: which crictl
	I1210 06:25:10.255306  336887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:10.283066  336887 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:10.283157  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.313027  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.346493  336887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:10.348155  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:10.367398  336887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:10.371843  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.385684  336887 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:10.387117  336887 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:10.387245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:10.387300  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.421783  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.421805  336887 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:10.421852  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.448367  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.448389  336887 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:10.448395  336887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:10.448494  336887 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:25:10.448573  336887 ssh_runner.go:195] Run: crio config
	I1210 06:25:10.498037  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:10.498063  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:10.498081  336887 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:10.498120  336887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:10.498246  336887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:25:10.498306  336887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:10.507229  336887 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:10.507302  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:10.516385  336887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:10.530854  336887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:10.548260  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1210 06:25:10.563281  336887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:10.567436  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.578747  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.660880  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:10.688248  336887 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:10.688268  336887 certs.go:195] generating shared ca certs ...
	I1210 06:25:10.688286  336887 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.688431  336887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:10.688526  336887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:10.688544  336887 certs.go:257] generating profile certs ...
	I1210 06:25:10.688612  336887 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:10.688636  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt with IP's: []
	I1210 06:25:10.813463  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt ...
	I1210 06:25:10.813530  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt: {Name:mk7009f3bf80c2397e5ae6cdebdca2735a7f7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813756  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key ...
	I1210 06:25:10.813772  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key: {Name:mk6d255207a819b82a749c48b0009054007ff91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813864  336887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:10.813882  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:25:11.022417  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf ...
	I1210 06:25:11.022443  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf: {Name:mk09a2e21f902ac4eed926780c1f90cb426b5a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022619  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf ...
	I1210 06:25:11.022632  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf: {Name:mkc73ed6c35fb6a21244daf518e5b2d0a7440a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022704  336887 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt
	I1210 06:25:11.022778  336887 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key
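The apiserver profile certificate generated above is signed by the shared minikubeCA and carries the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A self-contained Go sketch of issuing a CA-signed serving cert with IP SANs; key size, validity, subjects, and file names are illustrative and not minikube's actual values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (stand-in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate for the apiserver with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	// PEM-encode the issued certificate.
	out, _ := os.Create("apiserver.crt")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}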
	I1210 06:25:11.022831  336887 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:11.022848  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt with IP's: []
	I1210 06:25:11.088507  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt ...
	I1210 06:25:11.088534  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt: {Name:mkdd3c9abbfeb78fdbbafdaf53f324a4a2e625ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088686  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key ...
	I1210 06:25:11.088699  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key: {Name:mkd22ad5ae4429236c87cce8641338a9393df47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088869  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:11.088906  336887 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:11.088917  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:11.088939  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:11.088963  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:11.088988  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:11.089034  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:11.089621  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:11.108552  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:11.127416  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:11.146079  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:11.164732  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:11.183864  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:11.202457  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:11.221380  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:11.241165  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:11.262201  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:11.282304  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:11.302104  336887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:11.316208  336887 ssh_runner.go:195] Run: openssl version
	I1210 06:25:11.323011  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.331150  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:11.339353  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343453  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343539  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.378191  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:25:11.387532  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:25:11.395709  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.403915  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:11.413083  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417256  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417315  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.452744  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.460975  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.468848  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.477072  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:11.485572  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490083  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490144  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.529873  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:11.538675  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
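The ln -fs steps above follow OpenSSL's lookup convention: each CA PEM in /etc/ssl/certs gets a symlink named after its subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so verification can locate it by hash. A hedged Go sketch reproducing the same two steps the log runs, asking openssl for the hash and then creating the link; the paths below are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors "openssl x509 -hash -noout -in <pem>"
// followed by "ln -fs <pem> <certsDir>/<hash>.0".
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}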
	I1210 06:25:11.547942  336887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:11.552437  336887 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:25:11.552529  336887 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:11.552617  336887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:11.552673  336887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:11.582819  336887 cri.go:89] found id: ""
	I1210 06:25:11.582893  336887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:11.591576  336887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:25:11.600085  336887 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:25:11.600143  336887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:25:11.608700  336887 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:25:11.608723  336887 kubeadm.go:158] found existing configuration files:
	
	I1210 06:25:11.608773  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:25:11.617207  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:25:11.617265  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:25:11.625691  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:25:11.634058  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:25:11.634138  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:25:11.642174  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.650696  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:25:11.650751  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.658854  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:25:11.667261  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:25:11.667309  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:25:11.675445  336887 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:25:11.717793  336887 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:25:11.717857  336887 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:25:11.787773  336887 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:25:11.787862  336887 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:25:11.787918  336887 kubeadm.go:319] OS: Linux
	I1210 06:25:11.788013  336887 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:25:11.788088  336887 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:25:11.788209  336887 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:25:11.788287  336887 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:25:11.788329  336887 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:25:11.788400  336887 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:25:11.788501  336887 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:25:11.788573  336887 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:25:11.851680  336887 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:25:11.851818  336887 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:25:11.851989  336887 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:25:11.859860  336887 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:25:11.863122  336887 out.go:252]   - Generating certificates and keys ...
	I1210 06:25:11.863226  336887 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:25:11.863328  336887 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:25:11.994891  336887 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:25:12.216319  336887 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:25:12.263074  336887 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:25:12.317348  336887 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:25:12.348525  336887 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:25:12.348673  336887 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.453542  336887 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:25:12.453734  336887 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.554979  336887 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:25:12.639691  336887 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:25:12.675769  336887 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:25:12.675887  336887 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:25:12.733954  336887 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:25:12.762974  336887 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:25:12.895579  336887 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:25:12.968568  336887 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:25:13.242877  336887 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:25:13.243493  336887 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:25:13.247727  336887 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:25:13.249454  336887 out.go:252]   - Booting up control plane ...
	I1210 06:25:13.249584  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:25:13.249689  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:25:13.249772  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:25:13.266130  336887 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:25:13.266243  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:25:13.273740  336887 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:25:13.274070  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:25:13.274119  336887 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:25:13.387904  336887 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:25:13.388113  336887 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:25:13.888860  336887 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.995328ms
	I1210 06:25:13.892049  336887 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:25:13.892166  336887 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 06:25:13.892313  336887 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:25:13.892419  336887 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 06:25:11.887626  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:14.389916  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:14.896145  336887 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004021858s
	I1210 06:25:16.123662  336887 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.231275136s
	I1210 06:25:17.894620  336887 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00240365s
	I1210 06:25:17.919519  336887 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:25:17.933110  336887 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:25:17.946133  336887 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:25:17.946406  336887 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-126107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:25:17.956662  336887 kubeadm.go:319] [bootstrap-token] Using token: x794l4.dwxrqyazh7co8i2b
	I1210 06:25:17.958956  336887 out.go:252]   - Configuring RBAC rules ...
	I1210 06:25:17.959110  336887 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:25:17.962931  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:25:17.970206  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:25:17.974857  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:25:17.978201  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:25:17.981820  336887 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:25:18.305622  336887 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:25:18.724389  336887 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:25:19.303999  336887 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:25:19.305073  336887 kubeadm.go:319] 
	I1210 06:25:19.305166  336887 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:25:19.305178  336887 kubeadm.go:319] 
	I1210 06:25:19.305276  336887 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:25:19.305284  336887 kubeadm.go:319] 
	I1210 06:25:19.305325  336887 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:25:19.305407  336887 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:25:19.305518  336887 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:25:19.305539  336887 kubeadm.go:319] 
	I1210 06:25:19.305612  336887 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:25:19.305622  336887 kubeadm.go:319] 
	I1210 06:25:19.305692  336887 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:25:19.305701  336887 kubeadm.go:319] 
	I1210 06:25:19.305779  336887 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:25:19.305879  336887 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:25:19.305980  336887 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:25:19.305989  336887 kubeadm.go:319] 
	I1210 06:25:19.306147  336887 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:25:19.306259  336887 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:25:19.306268  336887 kubeadm.go:319] 
	I1210 06:25:19.306392  336887 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x794l4.dwxrqyazh7co8i2b \
	I1210 06:25:19.306553  336887 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 06:25:19.306586  336887 kubeadm.go:319] 	--control-plane 
	I1210 06:25:19.306595  336887 kubeadm.go:319] 
	I1210 06:25:19.306723  336887 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:25:19.306738  336887 kubeadm.go:319] 
	I1210 06:25:19.306834  336887 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x794l4.dwxrqyazh7co8i2b \
	I1210 06:25:19.306968  336887 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 06:25:19.309760  336887 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:25:19.309893  336887 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
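The kubeadm join commands printed above carry --discovery-token-ca-cert-hash sha256:63e2…, which kubeadm computes as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA during token-based bootstrap. A minimal Go sketch of that computation (the ca.crt path is taken from this run's layout and is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}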
	I1210 06:25:19.309921  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:19.309935  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:19.312593  336887 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:25:19.314078  336887 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:25:19.319527  336887 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 06:25:19.319547  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	W1210 06:25:16.888133  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:19.387854  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:20.387613  331193 pod_ready.go:94] pod "coredns-66bc5c9577-znsz6" is "Ready"
	I1210 06:25:20.387649  331193 pod_ready.go:86] duration metric: took 37.506338739s for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.390589  331193 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.394950  331193 pod_ready.go:94] pod "etcd-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.394970  331193 pod_ready.go:86] duration metric: took 4.358753ms for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.397078  331193 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.401552  331193 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.401582  331193 pod_ready.go:86] duration metric: took 4.480286ms for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.403436  331193 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.586026  331193 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.586066  331193 pod_ready.go:86] duration metric: took 182.609502ms for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.785406  331193 pod_ready.go:83] waiting for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.185282  331193 pod_ready.go:94] pod "kube-proxy-mkpzc" is "Ready"
	I1210 06:25:21.185312  331193 pod_ready.go:86] duration metric: took 399.878814ms for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.385632  331193 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.785630  331193 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:21.785657  331193 pod_ready.go:86] duration metric: took 399.99741ms for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.785671  331193 pod_ready.go:40] duration metric: took 38.908172562s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:21.838180  331193 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:21.841707  331193 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-643991" cluster and "default" namespace by default
	I1210 06:25:19.335897  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:25:19.579059  336887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:25:19.579223  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:19.579345  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-126107 minikube.k8s.io/updated_at=2025_12_10T06_25_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=newest-cni-126107 minikube.k8s.io/primary=true
	I1210 06:25:19.671172  336887 ops.go:34] apiserver oom_adj: -16
	I1210 06:25:19.671176  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:20.171713  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:20.671765  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:21.171646  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:21.672261  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:22.171664  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:22.671695  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:23.172215  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:23.672135  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:24.172113  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:24.262343  336887 kubeadm.go:1114] duration metric: took 4.683154391s to wait for elevateKubeSystemPrivileges
	I1210 06:25:24.262384  336887 kubeadm.go:403] duration metric: took 12.709859653s to StartCluster
	I1210 06:25:24.262405  336887 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:24.262534  336887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:24.264079  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:24.264340  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:25:24.264341  336887 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:25:24.264448  336887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:25:24.264587  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:24.264612  336887 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-126107"
	I1210 06:25:24.264617  336887 addons.go:70] Setting default-storageclass=true in profile "newest-cni-126107"
	I1210 06:25:24.264637  336887 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-126107"
	I1210 06:25:24.264658  336887 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-126107"
	I1210 06:25:24.264677  336887 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:24.265032  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:24.265187  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:24.265961  336887 out.go:179] * Verifying Kubernetes components...
	I1210 06:25:24.267358  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:24.295239  336887 addons.go:239] Setting addon default-storageclass=true in "newest-cni-126107"
	I1210 06:25:24.295286  336887 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:24.295784  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:24.299744  336887 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:25:24.301894  336887 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:24.301980  336887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:25:24.302108  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:24.322876  336887 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:24.322986  336887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:25:24.323121  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:24.336990  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:24.354804  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:24.367293  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
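The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" line and a log directive in front of errors, then replaces the ConfigMap. Reconstructed from those sed expressions, the affected part of the Corefile ends up roughly like the following (indentation and surrounding plugins abridged):

.:53 {
    log
    errors
    ...
    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}

This is what lets pods resolve host.minikube.internal to the host gateway, as confirmed by the "host record injected into CoreDNS's ConfigMap" line further down.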
	I1210 06:25:24.433694  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:24.458686  336887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:24.476281  336887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:24.587395  336887 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1210 06:25:24.591341  336887 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:25:24.591877  336887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:25:24.817163  336887 api_server.go:72] duration metric: took 552.788691ms to wait for apiserver process to appear ...
	I1210 06:25:24.817199  336887 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:25:24.817218  336887 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:24.822887  336887 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:25:24.823795  336887 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:25:24.823817  336887 api_server.go:131] duration metric: took 6.611695ms to wait for apiserver health ...
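The healthz wait above is a plain HTTPS GET against https://192.168.85.2:8443/healthz that succeeds once the endpoint returns 200 with body "ok". A compact Go sketch of such a probe; certificate verification is skipped here purely to keep the example short, which is an assumption of this sketch rather than how the real check is configured:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second) // retry until the apiserver answers
	}
}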
	I1210 06:25:24.823827  336887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:25:24.824951  336887 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Dec 10 06:24:41 embed-certs-133470 crio[568]: time="2025-12-10T06:24:41.255283618Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 06:24:41 embed-certs-133470 crio[568]: time="2025-12-10T06:24:41.263029369Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:24:41 embed-certs-133470 crio[568]: time="2025-12-10T06:24:41.263070304Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.939949314Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b34fdfe1-ebba-4483-81e1-52488fedd961 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.941010657Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8af157f9-6adc-46e4-ab85-7204c9907afe name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.942036291Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper" id=1e92ce76-b6ac-485d-89bf-757ccd4e18b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.942187192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.948731496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.949374863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.991580942Z" level=info msg="Created container 9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper" id=1e92ce76-b6ac-485d-89bf-757ccd4e18b7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.992270241Z" level=info msg="Starting container: 9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e" id=236f7fa6-ef3a-45b3-a99e-4c225bf0632c name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:24:58 embed-certs-133470 crio[568]: time="2025-12-10T06:24:58.994628542Z" level=info msg="Started container" PID=1736 containerID=9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper id=236f7fa6-ef3a-45b3-a99e-4c225bf0632c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c75bf0b937e2641ff4971c9fb95380d3bebe537000e0812f5b51c8baf29a5210
	Dec 10 06:24:59 embed-certs-133470 crio[568]: time="2025-12-10T06:24:59.093198147Z" level=info msg="Removing container: 40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa" id=413d91d3-9dde-4c52-b439-5d5625cefd3c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:24:59 embed-certs-133470 crio[568]: time="2025-12-10T06:24:59.105960985Z" level=info msg="Removed container 40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6/dashboard-metrics-scraper" id=413d91d3-9dde-4c52-b439-5d5625cefd3c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.102811477Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a060d036-d3f3-45b1-a5ce-b26ae70946c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.103938465Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eba4a69e-538b-4153-9411-df3f26090362 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.105116619Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1c8ecbc8-f3e5-4ad7-b841-0caa426fb8b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.105257732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112078138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112272621Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2795afba9d86c3adc3340fada668a8a86e84724b0bc4a08d48623eeff3f4336d/merged/etc/passwd: no such file or directory"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112304174Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2795afba9d86c3adc3340fada668a8a86e84724b0bc4a08d48623eeff3f4336d/merged/etc/group: no such file or directory"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.112561278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.144182707Z" level=info msg="Created container 2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7: kube-system/storage-provisioner/storage-provisioner" id=1c8ecbc8-f3e5-4ad7-b841-0caa426fb8b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.144918792Z" level=info msg="Starting container: 2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7" id=8e5c91a3-a5d0-4fdd-ae56-fc24a13e4d2f name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:01 embed-certs-133470 crio[568]: time="2025-12-10T06:25:01.146827671Z" level=info msg="Started container" PID=1750 containerID=2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7 description=kube-system/storage-provisioner/storage-provisioner id=8e5c91a3-a5d0-4fdd-ae56-fc24a13e4d2f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbb6a980af03349cb49876dc876a07aea3208a756c36d3d96198ec15a2ae1b89
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2f4a3c5b106dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   fbb6a980af033       storage-provisioner                          kube-system
	9da69852cec5c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   c75bf0b937e26       dashboard-metrics-scraper-6ffb444bf9-dbsf6   kubernetes-dashboard
	b57c64e71446e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   363efcf96c231       kubernetes-dashboard-855c9754f9-tvh5q        kubernetes-dashboard
	9b2e134d00ffb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   50eb908c53546       busybox                                      default
	13e976b147ae7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   074bfbc38ab76       coredns-66bc5c9577-gw75x                     kube-system
	7ed6660ccf81b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   fbb6a980af033       storage-provisioner                          kube-system
	0c31d45ef74fb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   7cda15afad876       kindnet-zhm6w                                kube-system
	e24ec95c65e2b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   e013c6bbad96b       kube-proxy-fkdk9                             kube-system
	d6469f0541702       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   30268545a32c1       etcd-embed-certs-133470                      kube-system
	7648ffbcd0289       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   de8aba7a4cb88       kube-apiserver-embed-certs-133470            kube-system
	1d978d02f9539       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   d7190721d2d0e       kube-scheduler-embed-certs-133470            kube-system
	41ac6d073418d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   20cfcf93ea42d       kube-controller-manager-embed-certs-133470   kube-system
	
	
	==> coredns [13e976b147ae71ac7ced68e8f9b72b5ec6754a28d1b1cf43d63103eda063a601] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50803 - 49741 "HINFO IN 6352376531880439996.1581486988354613661. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062236852s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-133470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-133470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=embed-certs-133470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-133470
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:24:59 +0000   Wed, 10 Dec 2025 06:23:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-133470
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                c679f347-b1a0-4ee9-b8eb-d12f4d1d4e6f
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-gw75x                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-133470                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-zhm6w                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-133470             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-133470    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-fkdk9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-133470             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dbsf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tvh5q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-133470 event: Registered Node embed-certs-133470 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-133470 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node embed-certs-133470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node embed-certs-133470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node embed-certs-133470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-133470 event: Registered Node embed-certs-133470 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [d6469f0541702fe81ba71666ade3d8b49b710a9889eeda64a30872196f87d79b] <==
	{"level":"warn","ts":"2025-12-10T06:24:27.936765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.946657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.958372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.967365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.977526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.988644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:27.997278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.011163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.023144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.036334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.045891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.055777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.065795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.083540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.093498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.111902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.124130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.132775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.142673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.152189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.166570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.174544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.189666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.197720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:28.259278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:25 up  1:07,  0 user,  load average: 4.99, 4.86, 3.07
	Linux embed-certs-133470 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c31d45ef74fb05281a156cb4b2c1bfd08a7578166fa2e49f92b067ceba00ed4] <==
	I1210 06:24:31.006292       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:31.029517       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1210 06:24:31.029698       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:31.029722       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:31.029748       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:31.232285       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:31.232655       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:31.232675       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:31.232824       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:31.529300       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:31.529333       1 metrics.go:72] Registering metrics
	I1210 06:24:31.529859       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:41.232582       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:24:41.232679       1 main.go:301] handling current node
	I1210 06:24:51.231997       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:24:51.232053       1 main.go:301] handling current node
	I1210 06:25:01.232556       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:25:01.232602       1 main.go:301] handling current node
	I1210 06:25:11.234608       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:25:11.234659       1 main.go:301] handling current node
	I1210 06:25:21.241562       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1210 06:25:21.241600       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7648ffbcd0289f174298c84e0db8f9defb9c9e8f94bb12bce5d42d6204170ddf] <==
	I1210 06:24:28.933694       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 06:24:28.935252       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:24:28.948505       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:24:28.959519       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 06:24:28.959557       1 policy_source.go:240] refreshing policies
	I1210 06:24:28.960213       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 06:24:28.960405       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 06:24:28.984566       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:24:28.984824       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:24:28.984841       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:24:28.988020       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:24:28.993787       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:24:29.012058       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:29.073422       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:24:29.323446       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:24:29.361147       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:24:29.382620       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:29.391772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:29.445798       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.228"}
	I1210 06:24:29.462461       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.139.223"}
	I1210 06:24:29.794964       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:24:32.577145       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:32.577203       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:32.628579       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:24:32.726184       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [41ac6d073418d2eb1af6e3c34750732dd3f22567edf771586f1f62db7cdeebd7] <==
	I1210 06:24:32.184687       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:24:32.188118       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:24:32.192431       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:24:32.200728       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 06:24:32.203897       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 06:24:32.207341       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:24:32.209623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1210 06:24:32.209704       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1210 06:24:32.209727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1210 06:24:32.210954       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 06:24:32.213232       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:24:32.222683       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:24:32.222692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:24:32.222826       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 06:24:32.222834       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:24:32.222841       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 06:24:32.223074       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 06:24:32.223123       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 06:24:32.223159       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:24:32.226611       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:24:32.227813       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:24:32.232145       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:32.232175       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:32.238082       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:24:32.246555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e24ec95c65e2b7512bd846c71358432fa87dca45b70403bb1e0c9397e2e56dc8] <==
	I1210 06:24:30.853073       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:24:30.919915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:24:31.020301       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:24:31.020340       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1210 06:24:31.020482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:24:31.042920       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:31.042996       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:24:31.048372       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:24:31.048751       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:24:31.048789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:31.049918       1 config.go:200] "Starting service config controller"
	I1210 06:24:31.049942       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:24:31.049981       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:24:31.050027       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:24:31.049978       1 config.go:309] "Starting node config controller"
	I1210 06:24:31.050064       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:24:31.050072       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:24:31.050088       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:24:31.050094       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:24:31.150117       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:24:31.150136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:24:31.150382       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1d978d02f9539453ea47a09b2b2ab8fb9b27a2bf69492ed41a51cb35be1aa40c] <==
	I1210 06:24:27.779689       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:24:29.142726       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:24:29.142839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:29.149365       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:24:29.149508       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:24:29.149541       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:24:29.149598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:29.149615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:29.149633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:29.149641       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:29.149774       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:24:29.250358       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:29.250452       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 06:24:29.250595       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:24:32 embed-certs-133470 kubelet[729]: I1210 06:24:32.782279     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb2g\" (UniqueName: \"kubernetes.io/projected/dcca238d-1725-4e73-8fdb-96f099dc9285-kube-api-access-jkb2g\") pod \"dashboard-metrics-scraper-6ffb444bf9-dbsf6\" (UID: \"dcca238d-1725-4e73-8fdb-96f099dc9285\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6"
	Dec 10 06:24:32 embed-certs-133470 kubelet[729]: I1210 06:24:32.782310     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdvq\" (UniqueName: \"kubernetes.io/projected/91ce86a6-7d58-4648-9399-d3b07c7e250c-kube-api-access-wxdvq\") pod \"kubernetes-dashboard-855c9754f9-tvh5q\" (UID: \"91ce86a6-7d58-4648-9399-d3b07c7e250c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tvh5q"
	Dec 10 06:24:37 embed-certs-133470 kubelet[729]: I1210 06:24:37.021792     729 scope.go:117] "RemoveContainer" containerID="6c44a6626a3e41d882765fe33034ca61228403b394f76f66db57a50ffff07681"
	Dec 10 06:24:37 embed-certs-133470 kubelet[729]: I1210 06:24:37.274099     729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:24:38 embed-certs-133470 kubelet[729]: I1210 06:24:38.027936     729 scope.go:117] "RemoveContainer" containerID="6c44a6626a3e41d882765fe33034ca61228403b394f76f66db57a50ffff07681"
	Dec 10 06:24:38 embed-certs-133470 kubelet[729]: I1210 06:24:38.028177     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:38 embed-certs-133470 kubelet[729]: E1210 06:24:38.028389     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:24:39 embed-certs-133470 kubelet[729]: I1210 06:24:39.032892     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:39 embed-certs-133470 kubelet[729]: E1210 06:24:39.033101     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:24:41 embed-certs-133470 kubelet[729]: I1210 06:24:41.061137     729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tvh5q" podStartSLOduration=1.56082388 podStartE2EDuration="9.061112865s" podCreationTimestamp="2025-12-10 06:24:32 +0000 UTC" firstStartedPulling="2025-12-10 06:24:33.027131858 +0000 UTC m=+7.243099990" lastFinishedPulling="2025-12-10 06:24:40.52742086 +0000 UTC m=+14.743388975" observedRunningTime="2025-12-10 06:24:41.060961221 +0000 UTC m=+15.276929356" watchObservedRunningTime="2025-12-10 06:24:41.061112865 +0000 UTC m=+15.277081000"
	Dec 10 06:24:43 embed-certs-133470 kubelet[729]: I1210 06:24:43.662376     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:43 embed-certs-133470 kubelet[729]: E1210 06:24:43.662672     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:24:58 embed-certs-133470 kubelet[729]: I1210 06:24:58.939343     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:59 embed-certs-133470 kubelet[729]: I1210 06:24:59.091786     729 scope.go:117] "RemoveContainer" containerID="40037b8d9e83afbdba48a3892e45813faf916ea4669240d875c83470d15614fa"
	Dec 10 06:24:59 embed-certs-133470 kubelet[729]: I1210 06:24:59.092034     729 scope.go:117] "RemoveContainer" containerID="9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	Dec 10 06:24:59 embed-certs-133470 kubelet[729]: E1210 06:24:59.092246     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:25:01 embed-certs-133470 kubelet[729]: I1210 06:25:01.102302     729 scope.go:117] "RemoveContainer" containerID="7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e"
	Dec 10 06:25:03 embed-certs-133470 kubelet[729]: I1210 06:25:03.662858     729 scope.go:117] "RemoveContainer" containerID="9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	Dec 10 06:25:03 embed-certs-133470 kubelet[729]: E1210 06:25:03.663126     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:25:15 embed-certs-133470 kubelet[729]: I1210 06:25:15.939119     729 scope.go:117] "RemoveContainer" containerID="9da69852cec5c98b4d4afab830eed3a9304b8c9cb909b9c5fa82381f94dd099e"
	Dec 10 06:25:15 embed-certs-133470 kubelet[729]: E1210 06:25:15.939360     729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dbsf6_kubernetes-dashboard(dcca238d-1725-4e73-8fdb-96f099dc9285)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dbsf6" podUID="dcca238d-1725-4e73-8fdb-96f099dc9285"
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:25:21 embed-certs-133470 systemd[1]: kubelet.service: Consumed 1.952s CPU time.
	
	
	==> kubernetes-dashboard [b57c64e71446e7e9d2ba0cd5b5c15928f33d9c4625a9b1fad5eeaa44af09c95e] <==
	2025/12/10 06:24:40 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:40 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:40 Using secret token for csrf signing
	2025/12/10 06:24:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:40 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 06:24:40 Generating JWE encryption key
	2025/12/10 06:24:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:40 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:40 Creating in-cluster Sidecar client
	2025/12/10 06:24:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:40 Serving insecurely on HTTP port: 9090
	2025/12/10 06:25:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:40 Starting overwatch
	
	
	==> storage-provisioner [2f4a3c5b106dc9be9345ac2e196e0149c6a49b366f48b0ae9bcc66efb6381bd7] <==
	I1210 06:25:01.160373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:25:01.169379       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:25:01.169427       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:25:01.172026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:04.627265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:08.888323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:12.487290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:15.540972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.564441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.571834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:18.572026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:25:18.572170       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcbf147d-e027-4c81-b883-f30651ab340b", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-133470_d68b3d64-08c0-48f5-851a-9d7a3377f3d6 became leader
	I1210 06:25:18.572318       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-133470_d68b3d64-08c0-48f5-851a-9d7a3377f3d6!
	W1210 06:25:18.575910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:18.582650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:18.672943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-133470_d68b3d64-08c0-48f5-851a-9d7a3377f3d6!
	W1210 06:25:20.587275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:20.593237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:22.599304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:22.604678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:24.608251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:24.614550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7ed6660ccf81b6a4976447ae69ba63d0e45dd08b146be33d81085a872b17b10e] <==
	I1210 06:24:30.823311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:25:00.829370       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-133470 -n embed-certs-133470
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-133470 -n embed-certs-133470: exit status 2 (356.726796ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-133470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.004959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
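The exit status 11 above is raised by minikube's paused-state check rather than by the metrics-server addon itself: the stderr shows the probe command (`sudo runc list -f json`) failing because /run/runc does not exist on this CRI-O node. Below is a minimal sketch for re-running that probe by hand against the same profile; the `minikube ssh` wrapper and the /run/crun comparison are illustrative assumptions, not something the test performs:

	# Re-run the probe quoted in the stderr above inside the node container.
	out/minikube-linux-amd64 ssh -p newest-cni-126107 -- sudo runc list -f json
	# Illustrative follow-up: check whether crun state exists instead of runc,
	# which would be consistent with "open /run/runc: no such file or directory".
	out/minikube-linux-amd64 ssh -p newest-cni-126107 -- sudo ls /run/runc /run/crun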
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-126107
helpers_test.go:244: (dbg) docker inspect newest-cni-126107:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647",
	        "Created": "2025-12-10T06:25:04.189215995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:25:04.236700846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/hosts",
	        "LogPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647-json.log",
	        "Name": "/newest-cni-126107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-126107:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-126107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647",
	                "LowerDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-126107",
	                "Source": "/var/lib/docker/volumes/newest-cni-126107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-126107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-126107",
	                "name.minikube.sigs.k8s.io": "newest-cni-126107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9ca3a6c565dcc4fb1eba2ae3171a20744a3d3bedb703f2052c92f6f9ae85be52",
	            "SandboxKey": "/var/run/docker/netns/9ca3a6c565dc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-126107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fb43db5713696641e964d1432fde86d3443ec48d700f0cf8b03518e1f4ba75f2",
	                    "EndpointID": "f7589ef1701ccf19b8b1d86565db2aaf3dc3ee53d7153fb473fdb88499131a65",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f6:5e:3c:7a:e1:48",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-126107",
	                        "fd722e851bba"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-126107 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-126107 logs -n 25: (1.031893887s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-713838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-133470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │                     │
	│ stop    │ -p no-preload-713838 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:24 UTC │
	│ stop    │ -p embed-certs-133470 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-643991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-643991 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:24:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:24:59.327087  336887 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:24:59.327365  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327375  336887 out.go:374] Setting ErrFile to fd 2...
	I1210 06:24:59.327379  336887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:24:59.327669  336887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:24:59.328143  336887 out.go:368] Setting JSON to false
	I1210 06:24:59.329429  336887 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4050,"bootTime":1765343849,"procs":361,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:24:59.329519  336887 start.go:143] virtualization: kvm guest
	I1210 06:24:59.331611  336887 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:24:59.333096  336887 notify.go:221] Checking for updates...
	I1210 06:24:59.333116  336887 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:24:59.334447  336887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:24:59.336068  336887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:24:59.337494  336887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:24:59.338960  336887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:24:59.340340  336887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:24:59.342187  336887 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342330  336887 config.go:182] Loaded profile config "embed-certs-133470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:24:59.342492  336887 config.go:182] Loaded profile config "no-preload-713838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:24:59.342623  336887 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:24:59.369242  336887 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:24:59.369328  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.432140  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.420604919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.432256  336887 docker.go:319] overlay module found
	I1210 06:24:59.435201  336887 out.go:179] * Using the docker driver based on user configuration
	W1210 06:24:55.887075  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:24:58.386507  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:24:59.436402  336887 start.go:309] selected driver: docker
	I1210 06:24:59.436415  336887 start.go:927] validating driver "docker" against <nil>
	I1210 06:24:59.436427  336887 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:24:59.436998  336887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:24:59.496347  336887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-10 06:24:59.486011226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:24:59.496517  336887 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 06:24:59.496554  336887 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 06:24:59.496758  336887 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:24:59.499173  336887 out.go:179] * Using Docker driver with root privileges
	I1210 06:24:59.500516  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:24:59.500598  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:24:59.500612  336887 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 06:24:59.500684  336887 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:24:59.502093  336887 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:24:59.503450  336887 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:24:59.504798  336887 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:24:59.506022  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:24:59.506091  336887 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:24:59.506102  336887 cache.go:65] Caching tarball of preloaded images
	I1210 06:24:59.506114  336887 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:24:59.506191  336887 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:24:59.506203  336887 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:24:59.506300  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:24:59.506323  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json: {Name:mkdf58f074b298e370024a6ce1eb0198fc1a1932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
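
The cluster spec assembled above is persisted as plain JSON under the profile directory, so the effective Kubernetes version, container runtime and network plugin can be read back without re-running minikube. A minimal sketch, assuming the Go struct fields shown above keep their names when serialized into config.json and that jq is available on the host:

    # Summarize the saved profile config (path taken from this run's MINIKUBE_HOME)
    jq '{name: .Name, k8s: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, cni: .KubernetesConfig.NetworkPlugin}' \
      /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json
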
	I1210 06:24:59.529599  336887 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:24:59.529619  336887 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:24:59.529645  336887 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:24:59.529672  336887 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:24:59.529766  336887 start.go:364] duration metric: took 78.432µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:24:59.529787  336887 start.go:93] Provisioning new machine with config: &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:24:59.529851  336887 start.go:125] createHost starting for "" (driver="docker")
	W1210 06:24:58.946860  326955 pod_ready.go:104] pod "coredns-7d764666f9-hr4gk" is not "Ready", error: <nil>
	I1210 06:25:00.446892  326955 pod_ready.go:94] pod "coredns-7d764666f9-hr4gk" is "Ready"
	I1210 06:25:00.446917  326955 pod_ready.go:86] duration metric: took 31.006503405s for pod "coredns-7d764666f9-hr4gk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.449783  326955 pod_ready.go:83] waiting for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.454644  326955 pod_ready.go:94] pod "etcd-no-preload-713838" is "Ready"
	I1210 06:25:00.454673  326955 pod_ready.go:86] duration metric: took 4.863318ms for pod "etcd-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.457203  326955 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.462197  326955 pod_ready.go:94] pod "kube-apiserver-no-preload-713838" is "Ready"
	I1210 06:25:00.462227  326955 pod_ready.go:86] duration metric: took 4.996726ms for pod "kube-apiserver-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.464859  326955 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.643687  326955 pod_ready.go:94] pod "kube-controller-manager-no-preload-713838" is "Ready"
	I1210 06:25:00.643711  326955 pod_ready.go:86] duration metric: took 178.834657ms for pod "kube-controller-manager-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:00.844018  326955 pod_ready.go:83] waiting for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.244075  326955 pod_ready.go:94] pod "kube-proxy-c62hk" is "Ready"
	I1210 06:25:01.244105  326955 pod_ready.go:86] duration metric: took 400.060427ms for pod "kube-proxy-c62hk" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.445041  326955 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843827  326955 pod_ready.go:94] pod "kube-scheduler-no-preload-713838" is "Ready"
	I1210 06:25:01.843854  326955 pod_ready.go:86] duration metric: took 398.788804ms for pod "kube-scheduler-no-preload-713838" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:01.843867  326955 pod_ready.go:40] duration metric: took 32.407570406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:01.891782  326955 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:01.897299  326955 out.go:179] * Done! kubectl is now configured to use "no-preload-713838" cluster and "default" namespace by default
	W1210 06:25:00.080872  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:02.579615  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:24:59.532875  336887 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 06:24:59.533186  336887 start.go:159] libmachine.API.Create for "newest-cni-126107" (driver="docker")
	I1210 06:24:59.533225  336887 client.go:173] LocalClient.Create starting
	I1210 06:24:59.533327  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem
	I1210 06:24:59.533388  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533416  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533500  336887 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem
	I1210 06:24:59.533540  336887 main.go:143] libmachine: Decoding PEM data...
	I1210 06:24:59.533557  336887 main.go:143] libmachine: Parsing certificate...
	I1210 06:24:59.533982  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 06:24:59.552885  336887 cli_runner.go:211] docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 06:24:59.552988  336887 network_create.go:284] running [docker network inspect newest-cni-126107] to gather additional debugging logs...
	I1210 06:24:59.553008  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107
	W1210 06:24:59.572451  336887 cli_runner.go:211] docker network inspect newest-cni-126107 returned with exit code 1
	I1210 06:24:59.572534  336887 network_create.go:287] error running [docker network inspect newest-cni-126107]: docker network inspect newest-cni-126107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-126107 not found
	I1210 06:24:59.572551  336887 network_create.go:289] output of [docker network inspect newest-cni-126107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-126107 not found
	
	** /stderr **
	I1210 06:24:59.572710  336887 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:24:59.592775  336887 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
	I1210 06:24:59.593342  336887 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2fbfa5ca31a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:30:9e:0a:da:73} reservation:<nil>}
	I1210 06:24:59.594133  336887 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-68b4fc4b224b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:0a:d7:21:69:83} reservation:<nil>}
	I1210 06:24:59.594915  336887 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a24a8ad90ff IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ea:e5:16:4c:6f} reservation:<nil>}
	I1210 06:24:59.595927  336887 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd18e0}
	I1210 06:24:59.595955  336887 network_create.go:124] attempt to create docker network newest-cni-126107 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 06:24:59.596007  336887 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-126107 newest-cni-126107
	I1210 06:24:59.648242  336887 network_create.go:108] docker network newest-cni-126107 192.168.85.0/24 created
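
The subnet scan above walks the existing bridge networks, skips the /24s they already claim, and then creates a dedicated bridge for the profile on the first free one. The same two steps with the plain docker CLI, as a sketch using the name and subnet picked in this run:

    # Subnets already claimed by existing docker networks
    docker network ls --format '{{.Name}}' \
      | xargs -r docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'

    # Create the per-profile bridge the way the log does, then verify it
    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=newest-cni-126107 \
      newest-cni-126107
    docker network inspect newest-cni-126107 \
      --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
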
	I1210 06:24:59.648276  336887 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-126107" container
	I1210 06:24:59.648334  336887 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 06:24:59.667592  336887 cli_runner.go:164] Run: docker volume create newest-cni-126107 --label name.minikube.sigs.k8s.io=newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true
	I1210 06:24:59.686982  336887 oci.go:103] Successfully created a docker volume newest-cni-126107
	I1210 06:24:59.687084  336887 cli_runner.go:164] Run: docker run --rm --name newest-cni-126107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --entrypoint /usr/bin/test -v newest-cni-126107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 06:25:00.115171  336887 oci.go:107] Successfully prepared a docker volume newest-cni-126107
	I1210 06:25:00.115245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:00.115259  336887 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 06:25:00.115360  336887 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 06:25:04.112675  336887 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-126107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.997248616s)
	I1210 06:25:04.112712  336887 kic.go:203] duration metric: took 3.997449096s to extract preloaded images to volume ...
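
Image preloading never talks to the container runtime directly: the lz4-compressed tarball from the host cache is untarred straight into the profile's named volume by a throw-away container that runs the kicbase image with tar as its entrypoint. A sketch of the same pattern, with the paths and image reference taken from this run:

    # Named volume that later becomes /var inside the node container
    docker volume create newest-cni-126107 \
      --label name.minikube.sigs.k8s.io=newest-cni-126107 \
      --label created_by.minikube.sigs.k8s.io=true

    # One-shot tar container extracts the preload into the volume
    PRELOAD=/home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v newest-cni-126107:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca \
      -I lz4 -xf /preloaded.tar -C /extractDir
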
	W1210 06:25:04.112837  336887 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1210 06:25:04.112877  336887 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1210 06:25:04.112928  336887 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 06:25:04.172016  336887 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-126107 --name newest-cni-126107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-126107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-126107 --network newest-cni-126107 --ip 192.168.85.2 --volume newest-cni-126107:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
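
The node itself is just a privileged container attached to that bridge with a static IP; ports 22, 2376, 5000, 8443 and 32443 are each published to a random host port bound to 127.0.0.1. Where a given port landed can be read back with the same Go template the provisioner uses a few lines further down; a sketch for the SSH port:

    # Host port mapped to the node's sshd
    docker container inspect newest-cni-126107 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

    # All published ports at a glance
    docker port newest-cni-126107
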
	W1210 06:25:00.387118  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:02.917573  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:04.579873  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	W1210 06:25:06.580394  327833 pod_ready.go:104] pod "coredns-66bc5c9577-gw75x" is not "Ready", error: <nil>
	I1210 06:25:07.580576  327833 pod_ready.go:94] pod "coredns-66bc5c9577-gw75x" is "Ready"
	I1210 06:25:07.580605  327833 pod_ready.go:86] duration metric: took 37.506619554s for pod "coredns-66bc5c9577-gw75x" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.583509  327833 pod_ready.go:83] waiting for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.587865  327833 pod_ready.go:94] pod "etcd-embed-certs-133470" is "Ready"
	I1210 06:25:07.587890  327833 pod_ready.go:86] duration metric: took 4.359471ms for pod "etcd-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.590170  327833 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.594746  327833 pod_ready.go:94] pod "kube-apiserver-embed-certs-133470" is "Ready"
	I1210 06:25:07.594774  327833 pod_ready.go:86] duration metric: took 4.57905ms for pod "kube-apiserver-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.596975  327833 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.778320  327833 pod_ready.go:94] pod "kube-controller-manager-embed-certs-133470" is "Ready"
	I1210 06:25:07.778347  327833 pod_ready.go:86] duration metric: took 181.346408ms for pod "kube-controller-manager-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:07.979006  327833 pod_ready.go:83] waiting for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.378607  327833 pod_ready.go:94] pod "kube-proxy-fkdk9" is "Ready"
	I1210 06:25:08.378631  327833 pod_ready.go:86] duration metric: took 399.601345ms for pod "kube-proxy-fkdk9" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.578014  327833 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978761  327833 pod_ready.go:94] pod "kube-scheduler-embed-certs-133470" is "Ready"
	I1210 06:25:08.978787  327833 pod_ready.go:86] duration metric: took 400.749384ms for pod "kube-scheduler-embed-certs-133470" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:08.978798  327833 pod_ready.go:40] duration metric: took 38.909473428s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:09.028286  327833 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:09.030218  327833 out.go:179] * Done! kubectl is now configured to use "embed-certs-133470" cluster and "default" namespace by default
	I1210 06:25:04.481386  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Running}}
	I1210 06:25:04.502244  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.522735  336887 cli_runner.go:164] Run: docker exec newest-cni-126107 stat /var/lib/dpkg/alternatives/iptables
	I1210 06:25:04.571010  336887 oci.go:144] the created container "newest-cni-126107" has a running status.
	I1210 06:25:04.571044  336887 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa...
	I1210 06:25:04.663409  336887 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 06:25:04.690550  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.713575  336887 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 06:25:04.713604  336887 kic_runner.go:114] Args: [docker exec --privileged newest-cni-126107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 06:25:04.767064  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:04.791773  336887 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:04.791873  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:04.819325  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:04.819813  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:04.819834  336887 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:04.820667  336887 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 06:25:07.958166  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:07.958195  336887 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:07.958260  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:07.980501  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:07.980710  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:07.980728  336887 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:08.127040  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:08.127128  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.147687  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.147963  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.147982  336887 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:08.283513  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:08.283545  336887 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:08.283569  336887 ubuntu.go:190] setting up certificates
	I1210 06:25:08.283582  336887 provision.go:84] configureAuth start
	I1210 06:25:08.283641  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:08.304777  336887 provision.go:143] copyHostCerts
	I1210 06:25:08.304859  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:08.304870  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:08.304943  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:08.305028  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:08.305036  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:08.305061  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:08.305130  336887 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:08.305138  336887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:08.305161  336887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:08.305231  336887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:08.358046  336887 provision.go:177] copyRemoteCerts
	I1210 06:25:08.358115  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:08.358153  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.378428  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.475365  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:08.497101  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:08.517033  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:08.536354  336887 provision.go:87] duration metric: took 252.752199ms to configureAuth
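
configureAuth regenerates a server certificate whose subject alternative names cover 127.0.0.1, the node's static IP, localhost, minikube and the profile name, then copies it together with the CA into /etc/docker inside the node. The SAN list on the freshly generated server.pem can be confirmed from the host; a sketch, assuming openssl is installed and using the MINIKUBE_HOME from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
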
	I1210 06:25:08.536379  336887 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:08.536554  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:08.536656  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.556388  336887 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:08.556749  336887 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1210 06:25:08.556781  336887 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:08.835275  336887 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:08.835301  336887 machine.go:97] duration metric: took 4.043503325s to provisionDockerMachine
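
The last provisioning step above drops a CRIO_MINIKUBE_OPTIONS file marking the whole service CIDR (10.96.0.0/12) as an insecure registry and restarts crio. Whether it took effect can be checked from the host; a sketch, assuming docker exec into the systemd-managed node container works as it does for the provisioner:

    docker exec newest-cni-126107 cat /etc/sysconfig/crio.minikube
    docker exec newest-cni-126107 systemctl is-active crio
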
	I1210 06:25:08.835313  336887 client.go:176] duration metric: took 9.302078213s to LocalClient.Create
	I1210 06:25:08.835335  336887 start.go:167] duration metric: took 9.302149263s to libmachine.API.Create "newest-cni-126107"
	I1210 06:25:08.835345  336887 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:08.835361  336887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:08.835432  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:08.835497  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:08.855854  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:08.956961  336887 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:08.961167  336887 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:08.961201  336887 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:08.961213  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:08.961271  336887 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:08.961344  336887 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:08.961433  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:08.970695  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:08.995442  336887 start.go:296] duration metric: took 160.082878ms for postStartSetup
	I1210 06:25:08.995880  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.016559  336887 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:09.016908  336887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:09.016964  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.038838  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.139907  336887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:09.145902  336887 start.go:128] duration metric: took 9.616033039s to createHost
	I1210 06:25:09.145930  336887 start.go:83] releasing machines lock for "newest-cni-126107", held for 9.616152275s
	I1210 06:25:09.146007  336887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:09.166587  336887 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:09.166650  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.166669  336887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:09.166759  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:09.189521  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.189525  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:09.284007  336887 ssh_runner.go:195] Run: systemctl --version
	W1210 06:25:05.386403  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:07.387202  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:09.387389  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:09.351948  336887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:09.392017  336887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:09.397100  336887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:09.397159  336887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:09.426437  336887 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
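
Any pre-existing bridge or podman CNI definitions inside the node are renamed to *.mk_disabled so they cannot shadow the CNI (kindnet, per the earlier recommendation) that gets installed later. The resulting directory contents can be listed directly; a sketch using the container name from this run:

    docker exec newest-cni-126107 ls -l /etc/cni/net.d
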
	I1210 06:25:09.426486  336887 start.go:496] detecting cgroup driver to use...
	I1210 06:25:09.426524  336887 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:09.426570  336887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:09.444100  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:09.457503  336887 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:09.457569  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:09.475303  336887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:09.495265  336887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:09.584209  336887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:09.673201  336887 docker.go:234] disabling docker service ...
	I1210 06:25:09.673262  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:09.692964  336887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:09.706562  336887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:09.794361  336887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:09.886009  336887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:09.899964  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:09.915638  336887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:09.915690  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.927534  336887 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:09.927591  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.937774  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.947722  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.957780  336887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:09.967038  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.977926  336887 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:09.993658  336887 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:10.003638  336887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:10.012100  336887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:10.021305  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.110274  336887 ssh_runner.go:195] Run: sudo systemctl restart crio
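
Taken together, the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, CRI-O is switched to the systemd cgroup driver detected on the host, conmon is forced into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls before systemd is reloaded and crio restarted. Collected into one script, as a sketch to run inside the node against the same drop-in file the log edits:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
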
	I1210 06:25:10.246619  336887 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:10.246690  336887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:10.251096  336887 start.go:564] Will wait 60s for crictl version
	I1210 06:25:10.251165  336887 ssh_runner.go:195] Run: which crictl
	I1210 06:25:10.255306  336887 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:10.283066  336887 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:10.283157  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.313027  336887 ssh_runner.go:195] Run: crio --version
	I1210 06:25:10.346493  336887 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:10.348155  336887 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:10.367398  336887 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:10.371843  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.385684  336887 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:10.387117  336887 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:10.387245  336887 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:10.387300  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.421783  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.421805  336887 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:10.421852  336887 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:10.448367  336887 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:10.448389  336887 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:10.448395  336887 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:10.448494  336887 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:25:10.448573  336887 ssh_runner.go:195] Run: crio config
	I1210 06:25:10.498037  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:10.498063  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:10.498081  336887 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:10.498120  336887 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:10.498246  336887 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:25:10.498306  336887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:10.507229  336887 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:10.507302  336887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:10.516385  336887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:10.530854  336887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:10.548260  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
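The kubelet drop-in, the kubelet unit, and the kubeadm config rendered above are now on the node at the paths shown in the scp steps. A sketch of inspecting them and sanity-checking the config with the kubeadm binary staged on the node, assuming this run's profile name and that the `kubeadm config validate` subcommand is available in this kubeadm version:

# Unit drop-in and rendered kubeadm config, at the paths from the scp steps above
minikube -p newest-cni-126107 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
minikube -p newest-cni-126107 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
# Structural check of the rendered config
minikube -p newest-cni-126107 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new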
	I1210 06:25:10.563281  336887 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:10.567436  336887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:10.578747  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:10.660880  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:10.688248  336887 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:10.688268  336887 certs.go:195] generating shared ca certs ...
	I1210 06:25:10.688286  336887 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.688431  336887 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:10.688526  336887 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:10.688544  336887 certs.go:257] generating profile certs ...
	I1210 06:25:10.688612  336887 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:10.688636  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt with IP's: []
	I1210 06:25:10.813463  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt ...
	I1210 06:25:10.813530  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.crt: {Name:mk7009f3bf80c2397e5ae6cdebdca2735a7f7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813756  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key ...
	I1210 06:25:10.813772  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key: {Name:mk6d255207a819b82a749c48b0009054007ff91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:10.813864  336887 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:10.813882  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 06:25:11.022417  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf ...
	I1210 06:25:11.022443  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf: {Name:mk09a2e21f902ac4eed926780c1f90cb426b5a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022619  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf ...
	I1210 06:25:11.022632  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf: {Name:mkc73ed6c35fb6a21244daf518e5b2d0a7440a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.022704  336887 certs.go:382] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt
	I1210 06:25:11.022778  336887 certs.go:386] copying /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf -> /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key
	I1210 06:25:11.022831  336887 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:11.022848  336887 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt with IP's: []
	I1210 06:25:11.088507  336887 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt ...
	I1210 06:25:11.088534  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt: {Name:mkdd3c9abbfeb78fdbbafdaf53f324a4a2e625ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088686  336887 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key ...
	I1210 06:25:11.088699  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key: {Name:mkd22ad5ae4429236c87cce8641338a9393df47a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:11.088869  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:11.088906  336887 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:11.088917  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:11.088939  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:11.088963  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:11.088988  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:11.089034  336887 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:11.089621  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:11.108552  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:11.127416  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:11.146079  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:11.164732  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:11.183864  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:11.202457  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:11.221380  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:11.241165  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:11.262201  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:11.282304  336887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:11.302104  336887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:11.316208  336887 ssh_runner.go:195] Run: openssl version
	I1210 06:25:11.323011  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.331150  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:11.339353  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343453  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.343539  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:11.378191  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:25:11.387532  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12374.pem /etc/ssl/certs/51391683.0
	I1210 06:25:11.395709  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.403915  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:11.413083  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417256  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.417315  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:11.452744  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.460975  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/123742.pem /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:11.468848  336887 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.477072  336887 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:11.485572  336887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490083  336887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.490144  336887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:11.529873  336887 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:11.538675  336887 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
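The ln -fs calls above expose each CA bundle to OpenSSL under /etc/ssl/certs/<subject-hash>.0, where the hash is what the `openssl x509 -hash` invocations compute. The same dance for the minikubeCA bundle, as a compact sketch run inside the node (the hash resolves to b5213941 in this run):

# OpenSSL looks CAs up by <subject-hash>.0 when scanning /etc/ssl/certs
HASH="$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)"
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"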
	I1210 06:25:11.547942  336887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:11.552437  336887 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 06:25:11.552529  336887 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:11.552617  336887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:11.552673  336887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:11.582819  336887 cri.go:89] found id: ""
	I1210 06:25:11.582893  336887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:11.591576  336887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:25:11.600085  336887 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:25:11.600143  336887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:25:11.608700  336887 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:25:11.608723  336887 kubeadm.go:158] found existing configuration files:
	
	I1210 06:25:11.608773  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 06:25:11.617207  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:25:11.617265  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:25:11.625691  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 06:25:11.634058  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:25:11.634138  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:25:11.642174  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.650696  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:25:11.650751  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:25:11.658854  336887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 06:25:11.667261  336887 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:25:11.667309  336887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:25:11.675445  336887 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:25:11.717793  336887 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 06:25:11.717857  336887 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:25:11.787773  336887 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:25:11.787862  336887 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1210 06:25:11.787918  336887 kubeadm.go:319] OS: Linux
	I1210 06:25:11.788013  336887 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:25:11.788088  336887 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:25:11.788209  336887 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:25:11.788287  336887 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:25:11.788329  336887 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:25:11.788400  336887 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:25:11.788501  336887 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:25:11.788573  336887 kubeadm.go:319] CGROUPS_IO: enabled
	I1210 06:25:11.851680  336887 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:25:11.851818  336887 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:25:11.851989  336887 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:25:11.859860  336887 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:25:11.863122  336887 out.go:252]   - Generating certificates and keys ...
	I1210 06:25:11.863226  336887 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:25:11.863328  336887 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:25:11.994891  336887 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 06:25:12.216319  336887 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 06:25:12.263074  336887 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 06:25:12.317348  336887 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 06:25:12.348525  336887 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 06:25:12.348673  336887 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.453542  336887 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 06:25:12.453734  336887 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-126107] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 06:25:12.554979  336887 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 06:25:12.639691  336887 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 06:25:12.675769  336887 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 06:25:12.675887  336887 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:25:12.733954  336887 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:25:12.762974  336887 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:25:12.895579  336887 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:25:12.968568  336887 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:25:13.242877  336887 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:25:13.243493  336887 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:25:13.247727  336887 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:25:13.249454  336887 out.go:252]   - Booting up control plane ...
	I1210 06:25:13.249584  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:25:13.249689  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:25:13.249772  336887 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:25:13.266130  336887 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:25:13.266243  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:25:13.273740  336887 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:25:13.274070  336887 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:25:13.274119  336887 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:25:13.387904  336887 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:25:13.388113  336887 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:25:13.888860  336887 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.995328ms
	I1210 06:25:13.892049  336887 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 06:25:13.892166  336887 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 06:25:13.892313  336887 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 06:25:13.892419  336887 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1210 06:25:11.887626  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:14.389916  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:14.896145  336887 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004021858s
	I1210 06:25:16.123662  336887 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.231275136s
	I1210 06:25:17.894620  336887 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00240365s
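kubeadm's control-plane-check polls the three health endpoints logged above. A rough manual equivalent, run from a shell inside the node (e.g. via minikube -p newest-cni-126107 ssh); the components serve self-signed certificates, hence -k:

# kube-apiserver livez on the advertised address
curl -k https://192.168.85.2:8443/livez
# kube-controller-manager and kube-scheduler health endpoints on localhost
curl -k https://127.0.0.1:10257/healthz
curl -k https://127.0.0.1:10259/livez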
	I1210 06:25:17.919519  336887 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 06:25:17.933110  336887 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 06:25:17.946133  336887 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 06:25:17.946406  336887 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-126107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 06:25:17.956662  336887 kubeadm.go:319] [bootstrap-token] Using token: x794l4.dwxrqyazh7co8i2b
	I1210 06:25:17.958956  336887 out.go:252]   - Configuring RBAC rules ...
	I1210 06:25:17.959110  336887 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 06:25:17.962931  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 06:25:17.970206  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 06:25:17.974857  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 06:25:17.978201  336887 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 06:25:17.981820  336887 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 06:25:18.305622  336887 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 06:25:18.724389  336887 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 06:25:19.303999  336887 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 06:25:19.305073  336887 kubeadm.go:319] 
	I1210 06:25:19.305166  336887 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 06:25:19.305178  336887 kubeadm.go:319] 
	I1210 06:25:19.305276  336887 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 06:25:19.305284  336887 kubeadm.go:319] 
	I1210 06:25:19.305325  336887 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 06:25:19.305407  336887 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 06:25:19.305518  336887 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 06:25:19.305539  336887 kubeadm.go:319] 
	I1210 06:25:19.305612  336887 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 06:25:19.305622  336887 kubeadm.go:319] 
	I1210 06:25:19.305692  336887 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 06:25:19.305701  336887 kubeadm.go:319] 
	I1210 06:25:19.305779  336887 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 06:25:19.305879  336887 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 06:25:19.305980  336887 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 06:25:19.305989  336887 kubeadm.go:319] 
	I1210 06:25:19.306147  336887 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 06:25:19.306259  336887 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 06:25:19.306268  336887 kubeadm.go:319] 
	I1210 06:25:19.306392  336887 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x794l4.dwxrqyazh7co8i2b \
	I1210 06:25:19.306553  336887 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 \
	I1210 06:25:19.306586  336887 kubeadm.go:319] 	--control-plane 
	I1210 06:25:19.306595  336887 kubeadm.go:319] 
	I1210 06:25:19.306723  336887 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 06:25:19.306738  336887 kubeadm.go:319] 
	I1210 06:25:19.306834  336887 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x794l4.dwxrqyazh7co8i2b \
	I1210 06:25:19.306968  336887 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:63e262019a0228173b835d7feaf739daf8c2f986042fc20415163ebad5fe89a5 
	I1210 06:25:19.309760  336887 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1210 06:25:19.309893  336887 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
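The join commands printed by kubeadm embed a bootstrap token and the CA certificate hash. A sketch of recomputing that --discovery-token-ca-cert-hash value and listing live tokens, run inside the node, using the standard kubeadm recipe with the certificatesDir from the config above:

# Recompute the --discovery-token-ca-cert-hash from the cluster CA
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
# Bootstrap tokens currently valid on this control plane
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm token list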
	I1210 06:25:19.309921  336887 cni.go:84] Creating CNI manager for ""
	I1210 06:25:19.309935  336887 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:19.312593  336887 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 06:25:19.314078  336887 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 06:25:19.319527  336887 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1210 06:25:19.319547  336887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	W1210 06:25:16.888133  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	W1210 06:25:19.387854  331193 pod_ready.go:104] pod "coredns-66bc5c9577-znsz6" is not "Ready", error: <nil>
	I1210 06:25:20.387613  331193 pod_ready.go:94] pod "coredns-66bc5c9577-znsz6" is "Ready"
	I1210 06:25:20.387649  331193 pod_ready.go:86] duration metric: took 37.506338739s for pod "coredns-66bc5c9577-znsz6" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.390589  331193 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.394950  331193 pod_ready.go:94] pod "etcd-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.394970  331193 pod_ready.go:86] duration metric: took 4.358753ms for pod "etcd-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.397078  331193 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.401552  331193 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.401582  331193 pod_ready.go:86] duration metric: took 4.480286ms for pod "kube-apiserver-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.403436  331193 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.586026  331193 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:20.586066  331193 pod_ready.go:86] duration metric: took 182.609502ms for pod "kube-controller-manager-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:20.785406  331193 pod_ready.go:83] waiting for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.185282  331193 pod_ready.go:94] pod "kube-proxy-mkpzc" is "Ready"
	I1210 06:25:21.185312  331193 pod_ready.go:86] duration metric: took 399.878814ms for pod "kube-proxy-mkpzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.385632  331193 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.785630  331193 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-643991" is "Ready"
	I1210 06:25:21.785657  331193 pod_ready.go:86] duration metric: took 399.99741ms for pod "kube-scheduler-default-k8s-diff-port-643991" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 06:25:21.785671  331193 pod_ready.go:40] duration metric: took 38.908172562s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 06:25:21.838180  331193 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1210 06:25:21.841707  331193 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-643991" cluster and "default" namespace by default
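The pod_ready polling above (process 331193) waits on one label selector per control-plane component of the default-k8s-diff-port-643991 cluster. A rough kubectl equivalent of the same readiness check, offered as a sketch rather than what the test harness actually runs (the 60s timeout is illustrative):

# Wait for each component's pods to report Ready, using the same selectors as the test
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context default-k8s-diff-port-643991 -n kube-system \
    wait --for=condition=Ready pod -l "$sel" --timeout=60s
done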
	I1210 06:25:19.335897  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 06:25:19.579059  336887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 06:25:19.579223  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:19.579345  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-126107 minikube.k8s.io/updated_at=2025_12_10T06_25_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=newest-cni-126107 minikube.k8s.io/primary=true
	I1210 06:25:19.671172  336887 ops.go:34] apiserver oom_adj: -16
	I1210 06:25:19.671176  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:20.171713  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:20.671765  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:21.171646  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:21.672261  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:22.171664  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:22.671695  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:23.172215  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:23.672135  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:24.172113  336887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 06:25:24.262343  336887 kubeadm.go:1114] duration metric: took 4.683154391s to wait for elevateKubeSystemPrivileges
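elevateKubeSystemPrivileges creates the minikube-rbac ClusterRoleBinding and then polls for the default service account, which is what the repeated `kubectl get sa default` calls above are doing. A quick sketch of confirming both once the kubectl context for this profile is configured:

kubectl --context newest-cni-126107 get clusterrolebinding minikube-rbac -o wide
kubectl --context newest-cni-126107 get serviceaccount default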
	I1210 06:25:24.262384  336887 kubeadm.go:403] duration metric: took 12.709859653s to StartCluster
	I1210 06:25:24.262405  336887 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:24.262534  336887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:24.264079  336887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:24.264340  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 06:25:24.264341  336887 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:25:24.264448  336887 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:25:24.264587  336887 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:24.264612  336887 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-126107"
	I1210 06:25:24.264617  336887 addons.go:70] Setting default-storageclass=true in profile "newest-cni-126107"
	I1210 06:25:24.264637  336887 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-126107"
	I1210 06:25:24.264658  336887 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-126107"
	I1210 06:25:24.264677  336887 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:24.265032  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:24.265187  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:24.265961  336887 out.go:179] * Verifying Kubernetes components...
	I1210 06:25:24.267358  336887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:24.295239  336887 addons.go:239] Setting addon default-storageclass=true in "newest-cni-126107"
	I1210 06:25:24.295286  336887 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:24.295784  336887 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:24.299744  336887 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:25:24.301894  336887 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:24.301980  336887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:25:24.302108  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:24.322876  336887 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:24.322986  336887 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:25:24.323121  336887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:24.336990  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:24.354804  336887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:24.367293  336887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 06:25:24.433694  336887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:24.458686  336887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:24.476281  336887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:24.587395  336887 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
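The replace pipeline above injects a hosts stanza into the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.85.1. A sketch of checking the result; the expected stanza is taken from the sed expression in that command:

kubectl --context newest-cni-126107 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# Expected to contain a block like:
#     hosts {
#        192.168.85.1 host.minikube.internal
#        fallthrough
#     }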
	I1210 06:25:24.591341  336887 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:25:24.591877  336887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:25:24.817163  336887 api_server.go:72] duration metric: took 552.788691ms to wait for apiserver process to appear ...
	I1210 06:25:24.817199  336887 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:25:24.817218  336887 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:24.822887  336887 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:25:24.823795  336887 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:25:24.823817  336887 api_server.go:131] duration metric: took 6.611695ms to wait for apiserver health ...
	I1210 06:25:24.823827  336887 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:25:24.824951  336887 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 06:25:24.827376  336887 addons.go:530] duration metric: took 562.925018ms for enable addons: enabled=[storage-provisioner default-storageclass]
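Only storage-provisioner and default-storageclass end up enabled for this profile. A one-line sketch of listing addon status from the host:

# Addon status for this profile
minikube -p newest-cni-126107 addons list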
	I1210 06:25:24.827620  336887 system_pods.go:59] 8 kube-system pods found
	I1210 06:25:24.827654  336887 system_pods.go:61] "coredns-7d764666f9-rsznm" [0ac06f22-e09b-497c-ad77-f09e614de459] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:25:24.827669  336887 system_pods.go:61] "etcd-newest-cni-126107" [01d020b0-65ef-48ac-a7fc-abd86d760e8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:25:24.827678  336887 system_pods.go:61] "kindnet-xj7td" [3cf83d19-8dae-4734-bdb5-0ce2410f4c99] Running
	I1210 06:25:24.827691  336887 system_pods.go:61] "kube-apiserver-newest-cni-126107" [984910c9-c993-4791-9830-55f3632d1af4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:25:24.827700  336887 system_pods.go:61] "kube-controller-manager-newest-cni-126107" [a811eae5-9f29-4614-9ab8-22c76a55f3b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:25:24.827717  336887 system_pods.go:61] "kube-proxy-sxc9w" [7bc19225-90f1-4759-bb4f-bc2da959865d] Running
	I1210 06:25:24.827724  336887 system_pods.go:61] "kube-scheduler-newest-cni-126107" [689e6051-ab4a-4edc-be1d-b6aa4b77b3a4] Running
	I1210 06:25:24.827736  336887 system_pods.go:61] "storage-provisioner" [e274ee92-ba8d-446f-a4d8-dd2e9c49ca78] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:25:24.827744  336887 system_pods.go:74] duration metric: took 3.912087ms to wait for pod list to return data ...
	I1210 06:25:24.827757  336887 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:25:24.830402  336887 default_sa.go:45] found service account: "default"
	I1210 06:25:24.830423  336887 default_sa.go:55] duration metric: took 2.660228ms for default service account to be created ...
	I1210 06:25:24.830442  336887 kubeadm.go:587] duration metric: took 566.065182ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:24.830456  336887 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:25:24.833268  336887 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:25:24.833310  336887 node_conditions.go:123] node cpu capacity is 8
	I1210 06:25:24.833326  336887 node_conditions.go:105] duration metric: took 2.865445ms to run NodePressure ...
	I1210 06:25:24.833340  336887 start.go:242] waiting for startup goroutines ...
	I1210 06:25:25.093217  336887 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-126107" context rescaled to 1 replicas
	I1210 06:25:25.093256  336887 start.go:247] waiting for cluster config update ...
	I1210 06:25:25.093273  336887 start.go:256] writing updated cluster config ...
	I1210 06:25:25.093688  336887 ssh_runner.go:195] Run: rm -f paused
	I1210 06:25:25.156514  336887 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:25.158355  336887 out.go:179] * Done! kubectl is now configured to use "newest-cni-126107" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.271870234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.276378122Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7564046d-38e7-4df4-92db-fbb987f97870 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.277385421Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=14655c00-5145-49f9-82fb-b8a21be735df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.279277791Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.280205512Z" level=info msg="Ran pod sandbox 73f5b45410cb62e617d60f7fe6fdc208f732c733f8251096ed59455a0ea33f66 with infra container: kube-system/kindnet-xj7td/POD" id=7564046d-38e7-4df4-92db-fbb987f97870 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.281199836Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.282162842Z" level=info msg="Ran pod sandbox daa8730fa716c70565524260bd9db34485ce5bd17f097f4e87dc58e20f50a87e with infra container: kube-system/kube-proxy-sxc9w/POD" id=14655c00-5145-49f9-82fb-b8a21be735df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.283685519Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=49fb30a5-fb23-4b14-b4ea-28718576675d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.283984187Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=dc6d1dd7-ce0e-426d-9761-59f82db33c97 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.285186651Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cab083d0-025e-4496-a510-2aefb3cff5f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.285226278Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a22390c2-906a-420d-95bd-0560b4dd4ed8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.290327957Z" level=info msg="Creating container: kube-system/kindnet-xj7td/kindnet-cni" id=ccab9237-f2ef-4acc-bd2c-03f96a05dc32 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.290457857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.291367301Z" level=info msg="Creating container: kube-system/kube-proxy-sxc9w/kube-proxy" id=494eb6bb-673e-45d2-bdbf-a711151980e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.291622453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.298361904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.299395929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.302064149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.302725957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.471965793Z" level=info msg="Created container 1985d25c9c472a19ba14819011523ee0ec537804ea81c821cffdef7df171e374: kube-system/kindnet-xj7td/kindnet-cni" id=ccab9237-f2ef-4acc-bd2c-03f96a05dc32 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.473052529Z" level=info msg="Starting container: 1985d25c9c472a19ba14819011523ee0ec537804ea81c821cffdef7df171e374" id=52216ed8-b106-472a-ab43-726508436524 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.4736165Z" level=info msg="Created container 9adcd7b8620bd78a91326a8ee87b7d17663e4098cd84c6034dd2c27e44efa665: kube-system/kube-proxy-sxc9w/kube-proxy" id=494eb6bb-673e-45d2-bdbf-a711151980e9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.474510923Z" level=info msg="Starting container: 9adcd7b8620bd78a91326a8ee87b7d17663e4098cd84c6034dd2c27e44efa665" id=d764c7c8-dac2-4b18-8aec-5cec3afeff51 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.475332059Z" level=info msg="Started container" PID=1584 containerID=1985d25c9c472a19ba14819011523ee0ec537804ea81c821cffdef7df171e374 description=kube-system/kindnet-xj7td/kindnet-cni id=52216ed8-b106-472a-ab43-726508436524 name=/runtime.v1.RuntimeService/StartContainer sandboxID=73f5b45410cb62e617d60f7fe6fdc208f732c733f8251096ed59455a0ea33f66
	Dec 10 06:25:24 newest-cni-126107 crio[785]: time="2025-12-10T06:25:24.478246281Z" level=info msg="Started container" PID=1585 containerID=9adcd7b8620bd78a91326a8ee87b7d17663e4098cd84c6034dd2c27e44efa665 description=kube-system/kube-proxy-sxc9w/kube-proxy id=d764c7c8-dac2-4b18-8aec-5cec3afeff51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=daa8730fa716c70565524260bd9db34485ce5bd17f097f4e87dc58e20f50a87e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9adcd7b8620bd       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   2 seconds ago       Running             kube-proxy                0                   daa8730fa716c       kube-proxy-sxc9w                            kube-system
	1985d25c9c472       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   73f5b45410cb6       kindnet-xj7td                               kube-system
	a4ce7e4f86383       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   12 seconds ago      Running             kube-apiserver            0                   df77b8ba8ff23       kube-apiserver-newest-cni-126107            kube-system
	f020688851ba1       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   12 seconds ago      Running             kube-scheduler            0                   914bbefa59d64       kube-scheduler-newest-cni-126107            kube-system
	3e11996b77b11       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   12 seconds ago      Running             kube-controller-manager   0                   b38a3610090bc       kube-controller-manager-newest-cni-126107   kube-system
	ac2f4ea7f92f4       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   12 seconds ago      Running             etcd                      0                   7b9500b3f51ea       etcd-newest-cni-126107                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-126107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-126107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=newest-cni-126107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:25:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-126107
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:25:18 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:25:18 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:25:18 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 06:25:18 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-126107
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                48dcb149-8660-4400-bf91-b049b5a968fc
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-126107                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-xj7td                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-126107             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-126107    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-sxc9w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-126107             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-126107 event: Registered Node newest-cni-126107 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [ac2f4ea7f92f4536f841d97e539d61a284fbcd93d88ce5e3c84a4976b548e4b1] <==
	{"level":"warn","ts":"2025-12-10T06:25:15.391287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.398877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.408725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.416664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.424506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.432006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.442978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.450519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.457066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.464564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.481743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.488714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.495603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.502855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.509813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.516861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.524515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.533483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.541861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.548843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.566801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.576750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.583637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.591772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:15.647004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33496","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:26 up  1:07,  0 user,  load average: 4.99, 4.86, 3.07
	Linux newest-cni-126107 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1985d25c9c472a19ba14819011523ee0ec537804ea81c821cffdef7df171e374] <==
	I1210 06:25:24.728189       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:25:24.728509       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:25:24.728715       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:25:24.728736       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:25:24.728770       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:25:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:25:24.931900       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:25:24.931924       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:25:24.931937       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:25:24.932115       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:25:25.332622       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:25:25.426485       1 metrics.go:72] Registering metrics
	I1210 06:25:25.426739       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a4ce7e4f863835a01fa0832f4ccd52415fb6ff07cf2bcb1a9fbf030332418fe3] <==
	I1210 06:25:16.159375       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1210 06:25:16.161172       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:25:16.163197       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:25:16.163371       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1210 06:25:16.167642       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:25:16.168496       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1210 06:25:16.170614       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:25:16.201003       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 06:25:17.064552       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1210 06:25:17.070784       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1210 06:25:17.070807       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:25:17.693021       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:25:17.747722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:25:17.869898       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1210 06:25:17.877208       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1210 06:25:17.878437       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 06:25:17.883762       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:25:18.091137       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:25:18.711964       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:25:18.723431       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 06:25:18.732724       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1210 06:25:23.942617       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1210 06:25:23.994686       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:25:24.000236       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:25:24.042809       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3e11996b77b11e5fa1c9970330de59944b1cf7f9c1fb23c760dadb3ddd123e81] <==
	I1210 06:25:22.897044       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:25:22.897079       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.897165       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.897194       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.897333       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.897430       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.897501       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.897537       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.898295       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.898371       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.898525       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.898602       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.899641       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.901888       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.901995       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.902129       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.902419       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.902620       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.903699       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:22.906372       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:22.910497       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-126107" podCIDRs=["10.42.0.0/24"]
	I1210 06:25:23.002787       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:23.002815       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:25:23.002822       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:25:23.003990       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [9adcd7b8620bd78a91326a8ee87b7d17663e4098cd84c6034dd2c27e44efa665] <==
	I1210 06:25:24.537871       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:25:24.624052       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:24.727416       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:24.727495       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 06:25:24.727627       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:25:24.750771       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:25:24.750836       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:25:24.757400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:25:24.758262       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 06:25:24.758447       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:25:24.761264       1 config.go:200] "Starting service config controller"
	I1210 06:25:24.761348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:25:24.761402       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:25:24.761439       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:25:24.761523       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:25:24.761567       1 config.go:309] "Starting node config controller"
	I1210 06:25:24.761584       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:25:24.761586       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:25:24.861553       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 06:25:24.861676       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:25:24.861681       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:25:24.861700       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [f020688851ba1528dee4b5631d3e627ad0b3f1e1b86b0ab80adc63583a61e8b1] <==
	E1210 06:25:17.086772       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1210 06:25:17.088291       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1210 06:25:17.123235       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1210 06:25:17.124576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1210 06:25:17.146842       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:25:17.148095       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1210 06:25:17.194989       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 06:25:17.196298       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1210 06:25:17.245301       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1210 06:25:17.249700       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1210 06:25:17.258303       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:25:17.259639       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1210 06:25:17.283028       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1210 06:25:17.284263       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1210 06:25:17.328808       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1210 06:25:17.330071       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1210 06:25:17.365552       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:25:17.366903       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1210 06:25:17.392350       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1210 06:25:17.393639       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1210 06:25:17.427027       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1210 06:25:17.428150       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1210 06:25:17.512002       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1210 06:25:17.513244       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1210 06:25:19.417881       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:25:19 newest-cni-126107 kubelet[1320]: E1210 06:25:19.618799    1320 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-126107\" already exists" pod="kube-system/kube-apiserver-newest-cni-126107"
	Dec 10 06:25:19 newest-cni-126107 kubelet[1320]: E1210 06:25:19.618882    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-126107" containerName="kube-apiserver"
	Dec 10 06:25:19 newest-cni-126107 kubelet[1320]: I1210 06:25:19.649581    1320 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-126107" podStartSLOduration=1.6495594260000002 podStartE2EDuration="1.649559426s" podCreationTimestamp="2025-12-10 06:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:25:19.635727006 +0000 UTC m=+1.163292282" watchObservedRunningTime="2025-12-10 06:25:19.649559426 +0000 UTC m=+1.177124684"
	Dec 10 06:25:19 newest-cni-126107 kubelet[1320]: I1210 06:25:19.664731    1320 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-126107" podStartSLOduration=1.664710847 podStartE2EDuration="1.664710847s" podCreationTimestamp="2025-12-10 06:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:25:19.649816698 +0000 UTC m=+1.177381956" watchObservedRunningTime="2025-12-10 06:25:19.664710847 +0000 UTC m=+1.192276106"
	Dec 10 06:25:19 newest-cni-126107 kubelet[1320]: I1210 06:25:19.664947    1320 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-126107" podStartSLOduration=1.66493722 podStartE2EDuration="1.66493722s" podCreationTimestamp="2025-12-10 06:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:25:19.664400849 +0000 UTC m=+1.191966106" watchObservedRunningTime="2025-12-10 06:25:19.66493722 +0000 UTC m=+1.192502476"
	Dec 10 06:25:19 newest-cni-126107 kubelet[1320]: I1210 06:25:19.676008    1320 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-126107" podStartSLOduration=1.675992007 podStartE2EDuration="1.675992007s" podCreationTimestamp="2025-12-10 06:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:25:19.675958095 +0000 UTC m=+1.203523353" watchObservedRunningTime="2025-12-10 06:25:19.675992007 +0000 UTC m=+1.203557276"
	Dec 10 06:25:20 newest-cni-126107 kubelet[1320]: E1210 06:25:20.607540    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-126107" containerName="kube-apiserver"
	Dec 10 06:25:20 newest-cni-126107 kubelet[1320]: E1210 06:25:20.607622    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-126107" containerName="kube-controller-manager"
	Dec 10 06:25:20 newest-cni-126107 kubelet[1320]: E1210 06:25:20.607911    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:20 newest-cni-126107 kubelet[1320]: E1210 06:25:20.608016    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-126107" containerName="kube-scheduler"
	Dec 10 06:25:21 newest-cni-126107 kubelet[1320]: E1210 06:25:21.609067    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-126107" containerName="kube-scheduler"
	Dec 10 06:25:21 newest-cni-126107 kubelet[1320]: E1210 06:25:21.609181    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:21 newest-cni-126107 kubelet[1320]: E1210 06:25:21.609304    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-126107" containerName="kube-apiserver"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.014347    1320 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.015533    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990328    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kwxj\" (UniqueName: \"kubernetes.io/projected/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-kube-api-access-4kwxj\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990383    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc19225-90f1-4759-bb4f-bc2da959865d-xtables-lock\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990411    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kq9q\" (UniqueName: \"kubernetes.io/projected/7bc19225-90f1-4759-bb4f-bc2da959865d-kube-api-access-8kq9q\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990557    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bc19225-90f1-4759-bb4f-bc2da959865d-kube-proxy\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990642    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc19225-90f1-4759-bb4f-bc2da959865d-lib-modules\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990720    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-cni-cfg\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990752    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-lib-modules\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:23 newest-cni-126107 kubelet[1320]: I1210 06:25:23.990785    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-xtables-lock\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:24 newest-cni-126107 kubelet[1320]: E1210 06:25:24.119223    1320 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:24 newest-cni-126107 kubelet[1320]: I1210 06:25:24.634091    1320 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-xj7td" podStartSLOduration=1.634071772 podStartE2EDuration="1.634071772s" podCreationTimestamp="2025-12-10 06:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 06:25:24.633285936 +0000 UTC m=+6.160851207" watchObservedRunningTime="2025-12-10 06:25:24.634071772 +0000 UTC m=+6.161637032"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-126107 -n newest-cni-126107
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-126107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-rsznm storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner: exit status 1 (60.568279ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-rsznm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.20s)
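For reference, the post-mortem steps recorded above can be re-run by hand against the same profile. The sketch below is not the helpers_test.go implementation; it simply shells out to kubectl with the context name taken from the log (newest-cni-126107) and assumes kubectl is on PATH and pointed at the same kubeconfig. It also scopes the describe call to each pod's namespace, since the describe recorded above ran without -n and reported NotFound for pods that live in kube-system.

package main

// Minimal sketch, assuming a reachable newest-cni-126107 context: list the
// non-running pods with the same field selector the harness used, then
// describe each one in its own namespace.

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-126107" // profile/context name taken from the log above

	// Same field selector as the harness, but also capture the namespace so
	// the follow-up describe can be scoped to it.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		`-o=jsonpath={range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{end}`,
	).CombinedOutput()
	if err != nil {
		fmt.Printf("listing non-running pods failed: %v\n%s\n", err, out)
		return
	}

	for _, ref := range strings.Fields(string(out)) {
		parts := strings.SplitN(ref, "/", 2)
		if len(parts) != 2 {
			continue
		}
		// Passing -n avoids the NotFound seen above for kube-system pods.
		desc, derr := exec.Command("kubectl", "--context", ctx,
			"describe", "pod", "-n", parts[0], parts[1]).CombinedOutput()
		fmt.Printf("--- %s (err=%v) ---\n%s\n", ref, derr, desc)
	}
}

Run with go run while the cluster from the failed test is still up; the output should match what helpers_test.go captured, minus the namespace issue.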

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-643991 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-643991 --alsologtostderr -v=1: exit status 80 (2.220268741s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-643991 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:25:33.585677  346166 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:33.585986  346166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:33.585998  346166 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:33.586002  346166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:33.586224  346166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:33.586454  346166 out.go:368] Setting JSON to false
	I1210 06:25:33.586483  346166 mustload.go:66] Loading cluster: default-k8s-diff-port-643991
	I1210 06:25:33.586813  346166 config.go:182] Loaded profile config "default-k8s-diff-port-643991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:25:33.587199  346166 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-643991 --format={{.State.Status}}
	I1210 06:25:33.605843  346166 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:25:33.606125  346166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:33.661425  346166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-10 06:25:33.651683564 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:33.662047  346166 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-643991 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:25:33.664364  346166 out.go:179] * Pausing node default-k8s-diff-port-643991 ... 
	I1210 06:25:33.665619  346166 host.go:66] Checking if "default-k8s-diff-port-643991" exists ...
	I1210 06:25:33.665890  346166 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:33.665926  346166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-643991
	I1210 06:25:33.685310  346166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/default-k8s-diff-port-643991/id_rsa Username:docker}
	I1210 06:25:33.780933  346166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:33.803385  346166 pause.go:52] kubelet running: true
	I1210 06:25:33.803484  346166 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:33.972507  346166 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:33.972581  346166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:34.045743  346166 cri.go:89] found id: "86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66"
	I1210 06:25:34.045767  346166 cri.go:89] found id: "bbaf4ecac9fe942940a527bc2c9d5ac2c8852370073f22a48aa6290e24b8f4fc"
	I1210 06:25:34.045772  346166 cri.go:89] found id: "08a9a203d5bf21ac5648ef3fb7638884576e075da5e173791b5158e58f55d0a9"
	I1210 06:25:34.045775  346166 cri.go:89] found id: "6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad"
	I1210 06:25:34.045778  346166 cri.go:89] found id: "fbe2e6e498d241f7b238abce28affa8ff3c952a61083e0b428a07c7fc0bac104"
	I1210 06:25:34.045782  346166 cri.go:89] found id: "8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577"
	I1210 06:25:34.045784  346166 cri.go:89] found id: "939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231"
	I1210 06:25:34.045787  346166 cri.go:89] found id: "9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9"
	I1210 06:25:34.045790  346166 cri.go:89] found id: "e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13"
	I1210 06:25:34.045799  346166 cri.go:89] found id: "b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	I1210 06:25:34.045802  346166 cri.go:89] found id: "422df2b0f39e08d2d4c6e2bf49639bf603b80a30c5394dde35005c632322f85f"
	I1210 06:25:34.045804  346166 cri.go:89] found id: ""
	I1210 06:25:34.045840  346166 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:34.058610  346166 retry.go:31] will retry after 223.274964ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:34Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:34.282066  346166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:34.295708  346166 pause.go:52] kubelet running: false
	I1210 06:25:34.295761  346166 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:34.445600  346166 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:34.445684  346166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:34.517904  346166 cri.go:89] found id: "86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66"
	I1210 06:25:34.517924  346166 cri.go:89] found id: "bbaf4ecac9fe942940a527bc2c9d5ac2c8852370073f22a48aa6290e24b8f4fc"
	I1210 06:25:34.517928  346166 cri.go:89] found id: "08a9a203d5bf21ac5648ef3fb7638884576e075da5e173791b5158e58f55d0a9"
	I1210 06:25:34.517931  346166 cri.go:89] found id: "6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad"
	I1210 06:25:34.517934  346166 cri.go:89] found id: "fbe2e6e498d241f7b238abce28affa8ff3c952a61083e0b428a07c7fc0bac104"
	I1210 06:25:34.517938  346166 cri.go:89] found id: "8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577"
	I1210 06:25:34.517941  346166 cri.go:89] found id: "939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231"
	I1210 06:25:34.517943  346166 cri.go:89] found id: "9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9"
	I1210 06:25:34.517946  346166 cri.go:89] found id: "e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13"
	I1210 06:25:34.517952  346166 cri.go:89] found id: "b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	I1210 06:25:34.517954  346166 cri.go:89] found id: "422df2b0f39e08d2d4c6e2bf49639bf603b80a30c5394dde35005c632322f85f"
	I1210 06:25:34.517957  346166 cri.go:89] found id: ""
	I1210 06:25:34.517992  346166 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:34.530294  346166 retry.go:31] will retry after 218.924557ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:34Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:34.749808  346166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:34.764115  346166 pause.go:52] kubelet running: false
	I1210 06:25:34.764191  346166 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:34.916329  346166 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:34.916425  346166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:34.987090  346166 cri.go:89] found id: "86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66"
	I1210 06:25:34.987115  346166 cri.go:89] found id: "bbaf4ecac9fe942940a527bc2c9d5ac2c8852370073f22a48aa6290e24b8f4fc"
	I1210 06:25:34.987121  346166 cri.go:89] found id: "08a9a203d5bf21ac5648ef3fb7638884576e075da5e173791b5158e58f55d0a9"
	I1210 06:25:34.987126  346166 cri.go:89] found id: "6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad"
	I1210 06:25:34.987131  346166 cri.go:89] found id: "fbe2e6e498d241f7b238abce28affa8ff3c952a61083e0b428a07c7fc0bac104"
	I1210 06:25:34.987136  346166 cri.go:89] found id: "8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577"
	I1210 06:25:34.987141  346166 cri.go:89] found id: "939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231"
	I1210 06:25:34.987145  346166 cri.go:89] found id: "9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9"
	I1210 06:25:34.987149  346166 cri.go:89] found id: "e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13"
	I1210 06:25:34.987167  346166 cri.go:89] found id: "b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	I1210 06:25:34.987176  346166 cri.go:89] found id: "422df2b0f39e08d2d4c6e2bf49639bf603b80a30c5394dde35005c632322f85f"
	I1210 06:25:34.987180  346166 cri.go:89] found id: ""
	I1210 06:25:34.987229  346166 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:35.001157  346166 retry.go:31] will retry after 486.826201ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:34Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:35.488730  346166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:35.501976  346166 pause.go:52] kubelet running: false
	I1210 06:25:35.502036  346166 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:35.649260  346166 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:35.649364  346166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:35.720906  346166 cri.go:89] found id: "86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66"
	I1210 06:25:35.720929  346166 cri.go:89] found id: "bbaf4ecac9fe942940a527bc2c9d5ac2c8852370073f22a48aa6290e24b8f4fc"
	I1210 06:25:35.720935  346166 cri.go:89] found id: "08a9a203d5bf21ac5648ef3fb7638884576e075da5e173791b5158e58f55d0a9"
	I1210 06:25:35.720940  346166 cri.go:89] found id: "6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad"
	I1210 06:25:35.720944  346166 cri.go:89] found id: "fbe2e6e498d241f7b238abce28affa8ff3c952a61083e0b428a07c7fc0bac104"
	I1210 06:25:35.720950  346166 cri.go:89] found id: "8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577"
	I1210 06:25:35.720954  346166 cri.go:89] found id: "939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231"
	I1210 06:25:35.720957  346166 cri.go:89] found id: "9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9"
	I1210 06:25:35.720959  346166 cri.go:89] found id: "e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13"
	I1210 06:25:35.720966  346166 cri.go:89] found id: "b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	I1210 06:25:35.720971  346166 cri.go:89] found id: "422df2b0f39e08d2d4c6e2bf49639bf603b80a30c5394dde35005c632322f85f"
	I1210 06:25:35.720975  346166 cri.go:89] found id: ""
	I1210 06:25:35.721032  346166 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:35.735877  346166 out.go:203] 
	W1210 06:25:35.737324  346166 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:25:35.737353  346166 out.go:285] * 
	W1210 06:25:35.741849  346166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:25:35.744218  346166 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-643991 --alsologtostderr -v=1 failed: exit status 80
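The stderr above shows the shape of the failure: kubelet is stopped, the kube-system / kubernetes-dashboard / istio-operator containers are still listed by crictl, but `sudo runc list -f json` keeps failing with "open /run/runc: no such file or directory" and is retried a few times with short, growing delays (retry.go) before the pause exits with GUEST_PAUSE. A minimal Go sketch of that retry-with-backoff behaviour, using a hypothetical retryWithBackoff helper rather than minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping a short,
	// jittered, roughly doubling delay between failures, and returns the last error.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := (base << uint(i)) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(3, 200*time.Millisecond, func() error {
			// Stand-in for "sudo runc list -f json" failing with
			// "open /run/runc: no such file or directory".
			return fmt.Errorf("list running: runc: exit status 1")
		})
		fmt.Println("final error:", err)
	}

In this run the last retry still fails, so the final error is surfaced as the GUEST_PAUSE message quoted above.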
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-643991
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-643991:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484",
	        "Created": "2025-12-10T06:23:22.165163212Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331400,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:24:29.714169609Z",
	            "FinishedAt": "2025-12-10T06:24:28.627745692Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/hostname",
	        "HostsPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/hosts",
	        "LogPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484-json.log",
	        "Name": "/default-k8s-diff-port-643991",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-643991:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-643991",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484",
	                "LowerDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-643991",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-643991/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-643991",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-643991",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-643991",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ced51ff946f764b9ea868e0ca91b0881af554e482a4812176dbdfa8ace024bd9",
	            "SandboxKey": "/var/run/docker/netns/ced51ff946f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-643991": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0a24a8ad90ffaf1aa41a72e3c38eeed58406686f85b6ce46090a934c6571e421",
	                    "EndpointID": "65c176ec3f15ee6d6e3e50c7463c9927f468423199a9523673581821eed10f75",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "12:48:ad:96:f6:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-643991",
	                        "acbf5c836807"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
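The 22/tcp host port (33129) that the pause command dialed at 06:25:33 comes straight from the NetworkSettings.Ports section of this inspect output; minikube resolves it with the Go template visible in the log. A small standalone sketch of the same lookup (profile name taken from this run, nothing beyond the plain docker CLI assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "default-k8s-diff-port-643991"
		// Same template as the cli_runner call in the log above:
		// NetworkSettings.Ports["22/tcp"][0].HostPort.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33129 in this run
	}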
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991: exit status 2 (336.776849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
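The status probe exits with code 2 but still prints "Running" on stdout, which the helper accepts ("may be ok"). A minimal Go sketch, assuming the same binary and profile used in this report, of reading that output even when the command exits non-zero:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-643991")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit (2 here) still leaves usable stdout: "Running".
			fmt.Printf("exit code %d, host state: %s\n", exitErr.ExitCode(), strings.TrimSpace(string(out)))
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host state:", strings.TrimSpace(string(out)))
	}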
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643991 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-643991 logs -n 25: (1.269391659s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ stop    │ -p newest-cni-126107 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-126107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ image   │ default-k8s-diff-port-643991 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p default-k8s-diff-port-643991 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:25:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:25:30.109871  345537 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:30.110102  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110112  345537 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:30.110116  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110304  345537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:30.110756  345537 out.go:368] Setting JSON to false
	I1210 06:25:30.111768  345537 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4081,"bootTime":1765343849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:25:30.111827  345537 start.go:143] virtualization: kvm guest
	I1210 06:25:30.113927  345537 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:25:30.115752  345537 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:25:30.115752  345537 notify.go:221] Checking for updates...
	I1210 06:25:30.118522  345537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:25:30.119763  345537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:30.121229  345537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:25:30.122829  345537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:25:30.124211  345537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:25:30.126209  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:30.126830  345537 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:25:30.151836  345537 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:25:30.151928  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.210924  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.200725078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.211055  345537 docker.go:319] overlay module found
	I1210 06:25:30.212977  345537 out.go:179] * Using the docker driver based on existing profile
	I1210 06:25:30.214243  345537 start.go:309] selected driver: docker
	I1210 06:25:30.214258  345537 start.go:927] validating driver "docker" against &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.214369  345537 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:25:30.215062  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.276019  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.266281878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.276342  345537 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:30.276370  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:30.276425  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:30.276460  345537 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.278593  345537 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:25:30.279972  345537 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:25:30.281412  345537 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:25:30.282704  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:30.282744  345537 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:25:30.282754  345537 cache.go:65] Caching tarball of preloaded images
	I1210 06:25:30.282808  345537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:25:30.282846  345537 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:25:30.282857  345537 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:25:30.282949  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.303637  345537 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:25:30.303656  345537 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:25:30.303670  345537 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:25:30.303700  345537 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:25:30.303753  345537 start.go:364] duration metric: took 36.893µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:25:30.303770  345537 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:25:30.303776  345537 fix.go:54] fixHost starting: 
	I1210 06:25:30.303978  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.322589  345537 fix.go:112] recreateIfNeeded on newest-cni-126107: state=Stopped err=<nil>
	W1210 06:25:30.322625  345537 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:25:30.324708  345537 out.go:252] * Restarting existing docker container for "newest-cni-126107" ...
	I1210 06:25:30.324786  345537 cli_runner.go:164] Run: docker start newest-cni-126107
	I1210 06:25:30.586048  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.606349  345537 kic.go:430] container "newest-cni-126107" state is running.
	I1210 06:25:30.606765  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:30.626578  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.626856  345537 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:30.626926  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:30.645878  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:30.646136  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:30.646149  345537 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:30.646758  345537 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53190->127.0.0.1:33139: read: connection reset by peer
	I1210 06:25:33.780525  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.780558  345537 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:33.780660  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.800442  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.800684  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.800700  345537 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:33.947960  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.948061  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.968186  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.968388  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.968404  345537 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:34.105319  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:34.105346  345537 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:34.105372  345537 ubuntu.go:190] setting up certificates
	I1210 06:25:34.105382  345537 provision.go:84] configureAuth start
	I1210 06:25:34.105437  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.124550  345537 provision.go:143] copyHostCerts
	I1210 06:25:34.124621  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:34.124635  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:34.124709  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:34.124824  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:34.124833  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:34.124860  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:34.124930  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:34.124937  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:34.124961  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:34.125025  345537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:34.193303  345537 provision.go:177] copyRemoteCerts
	I1210 06:25:34.193367  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:34.193402  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.212955  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.311230  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:34.330510  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:34.354851  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:34.373898  345537 provision.go:87] duration metric: took 268.50473ms to configureAuth
	I1210 06:25:34.373925  345537 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:34.374105  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:34.374216  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.394026  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:34.394302  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:34.394331  345537 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:34.690980  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:34.691015  345537 machine.go:97] duration metric: took 4.064140388s to provisionDockerMachine
	I1210 06:25:34.691029  345537 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:34.691080  345537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:34.691147  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:34.691183  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.710980  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.809193  345537 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:34.813269  345537 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:34.813313  345537 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:34.813327  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:34.813383  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:34.813505  345537 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:34.813619  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:34.821859  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:34.840261  345537 start.go:296] duration metric: took 149.200393ms for postStartSetup
	I1210 06:25:34.840342  345537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:34.840397  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.859669  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.953162  345537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:34.958380  345537 fix.go:56] duration metric: took 4.654596627s for fixHost
	I1210 06:25:34.958415  345537 start.go:83] releasing machines lock for "newest-cni-126107", held for 4.654651631s
	I1210 06:25:34.958495  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.980057  345537 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:34.980079  345537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:34.980145  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.980146  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:35.002231  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.002423  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.094904  345537 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:35.153916  345537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:35.191258  345537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:35.196136  345537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:35.196197  345537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:35.204676  345537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:25:35.204704  345537 start.go:496] detecting cgroup driver to use...
	I1210 06:25:35.204735  345537 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:35.204795  345537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:35.220331  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:35.233476  345537 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:35.233536  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:35.248932  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:35.263006  345537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:35.344446  345537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:35.426091  345537 docker.go:234] disabling docker service ...
	I1210 06:25:35.426167  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:35.440762  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:35.453694  345537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:35.544590  345537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:35.623824  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:35.636961  345537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:35.651831  345537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:35.651879  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.661164  345537 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:35.661233  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.670965  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.681369  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.691670  345537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:35.702453  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.712207  345537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.722297  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.731324  345537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:35.740103  345537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:35.748317  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:35.839721  345537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:25:35.979010  345537 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:35.979076  345537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:35.983132  345537 start.go:564] Will wait 60s for crictl version
	I1210 06:25:35.983199  345537 ssh_runner.go:195] Run: which crictl
	I1210 06:25:35.986794  345537 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:36.012672  345537 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:36.012774  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.047570  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.081122  345537 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:36.085692  345537 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:36.104980  345537 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:36.109299  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:36.123029  345537 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
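	The sed commands logged above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and write /etc/crictl.yaml before crio is restarted. A minimal sketch for spot-checking the result over SSH, assuming the newest-cni-126107 profile is still running; only the keys and values below are taken from the log, the surrounding file layout is an assumption and these commands are not part of the recorded run:
	
	  # illustrative check only; run from the host that owns the profile
	  out/minikube-linux-amd64 -p newest-cni-126107 ssh -- \
	    'grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" /etc/crio/crio.conf.d/02-crio.conf && cat /etc/crictl.yaml'
	  # expected values, per the commands logged above:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "systemd"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  #   runtime-endpoint: unix:///var/run/crio/crio.sock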
	
	
	==> CRI-O <==
	Dec 10 06:24:52 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:24:52.734316391Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 06:24:52 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:24:52.738207346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:24:52 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:24:52.738231424Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.881657326Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aef6e236-85da-48a1-a8da-e065f9fcfce3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.884988194Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c49197bc-590a-4671-8f29-2ed7c2384b9a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.888354957Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper" id=dbfd6f11-deab-4da5-b00c-b650d4220805 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.888532538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.89507295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.895638703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.935098253Z" level=info msg="Created container b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper" id=dbfd6f11-deab-4da5-b00c-b650d4220805 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.93585391Z" level=info msg="Starting container: b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe" id=a63b51b2-e550-419c-acef-3c1339f6f631 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.938442827Z" level=info msg="Started container" PID=1779 containerID=b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper id=a63b51b2-e550-419c-acef-3c1339f6f631 name=/runtime.v1.RuntimeService/StartContainer sandboxID=872e55770732aa127aaf222fed14f4c77f04cd5c08d947cdec44cf88aee20465
	Dec 10 06:25:08 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:08.009172219Z" level=info msg="Removing container: 339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8" id=be4998b3-2560-4918-bbee-dea8844dab27 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:25:08 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:08.020378611Z" level=info msg="Removed container 339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper" id=be4998b3-2560-4918-bbee-dea8844dab27 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.024408777Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=942903a7-08dc-4a22-a5bc-45c630cbd537 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.025546355Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e6bc6146-881a-4dfe-b9eb-d239312249af name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.026723318Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3196b3e-e032-4f6e-a8f2-75babe5a8989 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.02686167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031262228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031454195Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8eeceb616a3fb87fa178f86183db3b3d874408a8a028f2536f830e4edfbf4bb4/merged/etc/passwd: no such file or directory"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031499158Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8eeceb616a3fb87fa178f86183db3b3d874408a8a028f2536f830e4edfbf4bb4/merged/etc/group: no such file or directory"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031785103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.066163925Z" level=info msg="Created container 86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66: kube-system/storage-provisioner/storage-provisioner" id=b3196b3e-e032-4f6e-a8f2-75babe5a8989 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.066852198Z" level=info msg="Starting container: 86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66" id=325639e9-8eea-4b73-bcae-18794aa7f431 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.069216405Z" level=info msg="Started container" PID=1797 containerID=86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66 description=kube-system/storage-provisioner/storage-provisioner id=325639e9-8eea-4b73-bcae-18794aa7f431 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d503f6924a50a4effa8ce0b593441298bd1f2210180cd8bb93dee5c3055d825
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	86d39ffede9e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   8d503f6924a50       storage-provisioner                                    kube-system
	b4a88d6bb8c02       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   872e55770732a       dashboard-metrics-scraper-6ffb444bf9-sqs8v             kubernetes-dashboard
	422df2b0f39e0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   ad7f6c4e7fe6b       kubernetes-dashboard-855c9754f9-llkbc                  kubernetes-dashboard
	0d4cc9c331feb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   fc1b26dd40cc4       busybox                                                default
	bbaf4ecac9fe9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   fede07f270728       coredns-66bc5c9577-znsz6                               kube-system
	08a9a203d5bf2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   7cb9b8bf9b4cc       kindnet-7j6ns                                          kube-system
	6c44f81745b50       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   8d503f6924a50       storage-provisioner                                    kube-system
	fbe2e6e498d24       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           54 seconds ago      Running             kube-proxy                  0                   82256b58103e3       kube-proxy-mkpzc                                       kube-system
	8cb208db60562       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           57 seconds ago      Running             kube-apiserver              0                   1707693fe89c9       kube-apiserver-default-k8s-diff-port-643991            kube-system
	939f270b7e908       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           57 seconds ago      Running             kube-controller-manager     0                   df9d09efbd2cf       kube-controller-manager-default-k8s-diff-port-643991   kube-system
	9b258fc04f844       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           57 seconds ago      Running             kube-scheduler              0                   0f86fbd80a599       kube-scheduler-default-k8s-diff-port-643991            kube-system
	e3522bb390040       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   29a45a9cede7c       etcd-default-k8s-diff-port-643991                      kube-system
	
	
	==> coredns [bbaf4ecac9fe942940a527bc2c9d5ac2c8852370073f22a48aa6290e24b8f4fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39326 - 41500 "HINFO IN 2741511875219942973.7423051502085985891. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054343815s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
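	The dial timeouts to 10.96.0.1:443 above suggest this CoreDNS pod could not reach the in-cluster kubernetes Service (the default apiserver ClusterIP) while the control plane was coming back up. A minimal sketch of the usual follow-up checks, assuming kubectl access to this profile; the k8s-app=kube-dns label is the standard kubeadm/minikube CoreDNS label and is an assumption here, and none of these commands were part of the recorded run:
	
	  # confirm the ClusterIP the pod is dialing and the CoreDNS pod state
	  kubectl get svc kubernetes -o wide
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20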
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-643991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-643991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=default-k8s-diff-port-643991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-643991
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-643991
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                dd07e36d-8369-41a9-8fa1-68f38e5abb55
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-znsz6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-643991                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-7j6ns                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-643991             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-643991    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-mkpzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-643991             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sqs8v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-llkbc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-643991 event: Registered Node default-k8s-diff-port-643991 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-643991 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-643991 event: Registered Node default-k8s-diff-port-643991 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13] <==
	{"level":"warn","ts":"2025-12-10T06:24:40.599232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.607786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.616767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.623342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.630987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.639538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.646963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.656660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.664544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.671873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.680758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.688223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.696282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.703869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.713057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.724052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.732788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.740244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.747592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.763030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.770147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.777674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.833768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:02.623537Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.191332ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357210171067668 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:590 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414985173316291858 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:25:02.623670Z","caller":"traceutil/trace.go:172","msg":"trace[1910628044] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"129.299009ms","start":"2025-12-10T06:25:02.494358Z","end":"2025-12-10T06:25:02.623657Z","steps":["trace[1910628044] 'compare'  (duration: 124.055635ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:25:36 up  1:08,  0 user,  load average: 4.53, 4.77, 3.06
	Linux default-k8s-diff-port-643991 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [08a9a203d5bf21ac5648ef3fb7638884576e075da5e173791b5158e58f55d0a9] <==
	I1210 06:24:42.414319       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:42.508884       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:24:42.509065       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:42.509084       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:42.509125       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:42.712625       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:42.712682       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:42.712696       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:42.810104       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:43.209034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:43.209083       1 metrics.go:72] Registering metrics
	I1210 06:24:43.209551       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:52.712446       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:24:52.712552       1 main.go:301] handling current node
	I1210 06:25:02.713237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:02.713298       1 main.go:301] handling current node
	I1210 06:25:12.713040       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:12.713124       1 main.go:301] handling current node
	I1210 06:25:22.713622       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:22.713661       1 main.go:301] handling current node
	I1210 06:25:32.713842       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:32.713885       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577] <==
	I1210 06:24:41.344201       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:24:41.344219       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:24:41.344227       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:24:41.344234       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:24:41.347141       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:24:41.347352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1210 06:24:41.350155       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:24:41.355105       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 06:24:41.358771       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:24:41.358785       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:24:41.358810       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:24:41.365312       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:24:41.379365       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:41.639392       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:24:41.675511       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:24:41.697889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:41.710438       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:41.718851       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:24:41.763599       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.2.59"}
	I1210 06:24:41.776138       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.168.2"}
	I1210 06:24:42.247581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:24:44.798527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:24:45.099166       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:45.099166       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:45.148447       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231] <==
	I1210 06:24:44.645904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:24:44.645983       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:24:44.651205       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:24:44.651232       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 06:24:44.652408       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:44.653596       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:24:44.675868       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:24:44.694603       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:24:44.694629       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:24:44.694644       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 06:24:44.694777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:24:44.695841       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:24:44.695870       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:24:44.695890       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:24:44.695926       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:24:44.695932       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:24:44.695931       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:24:44.697769       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:24:44.697787       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:24:44.697795       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:24:44.700105       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:24:44.700237       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:44.702344       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:24:44.704614       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:24:44.718980       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [fbe2e6e498d241f7b238abce28affa8ff3c952a61083e0b428a07c7fc0bac104] <==
	I1210 06:24:42.257266       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:24:42.330681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:24:42.431553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:24:42.431601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:24:42.431733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:24:42.453767       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:42.453832       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:24:42.460227       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:24:42.460710       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:24:42.460750       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:42.462267       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:24:42.462292       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:24:42.462290       1 config.go:200] "Starting service config controller"
	I1210 06:24:42.463035       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:24:42.463142       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:24:42.463156       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:24:42.463553       1 config.go:309] "Starting node config controller"
	I1210 06:24:42.464374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:24:42.464400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:24:42.563210       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:24:42.563236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:24:42.563278       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9] <==
	I1210 06:24:41.150833       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:24:41.749030       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:24:41.749065       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:41.753546       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:41.753563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:41.753586       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:41.753598       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:41.753554       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:24:41.753651       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:24:41.754032       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:24:41.754099       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:24:41.854725       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 06:24:41.854829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:41.854756       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:24:45 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:45.403577     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/390a0b83-fd9c-42b8-8732-362bbb3a7e9a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-llkbc\" (UID: \"390a0b83-fd9c-42b8-8732-362bbb3a7e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-llkbc"
	Dec 10 06:24:45 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:45.403598     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2lh\" (UniqueName: \"kubernetes.io/projected/390a0b83-fd9c-42b8-8732-362bbb3a7e9a-kube-api-access-bq2lh\") pod \"kubernetes-dashboard-855c9754f9-llkbc\" (UID: \"390a0b83-fd9c-42b8-8732-362bbb3a7e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-llkbc"
	Dec 10 06:24:49 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:49.964875     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:24:50 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:50.958271     731 scope.go:117] "RemoveContainer" containerID="ce49e56c5b17b1cf6a478d5d972b0f65514ace4dcb01b9966653837a659acb5d"
	Dec 10 06:24:50 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:50.968194     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-llkbc" podStartSLOduration=3.259869578 podStartE2EDuration="5.968169656s" podCreationTimestamp="2025-12-10 06:24:45 +0000 UTC" firstStartedPulling="2025-12-10 06:24:45.640752776 +0000 UTC m=+6.881605108" lastFinishedPulling="2025-12-10 06:24:48.349052873 +0000 UTC m=+9.589905186" observedRunningTime="2025-12-10 06:24:48.983409641 +0000 UTC m=+10.224261974" watchObservedRunningTime="2025-12-10 06:24:50.968169656 +0000 UTC m=+12.209021988"
	Dec 10 06:24:51 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:51.963567     731 scope.go:117] "RemoveContainer" containerID="ce49e56c5b17b1cf6a478d5d972b0f65514ace4dcb01b9966653837a659acb5d"
	Dec 10 06:24:51 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:51.963920     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:24:51 default-k8s-diff-port-643991 kubelet[731]: E1210 06:24:51.964108     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:24:52 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:52.970012     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:24:52 default-k8s-diff-port-643991 kubelet[731]: E1210 06:24:52.970186     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:24:55 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:55.669509     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:24:55 default-k8s-diff-port-643991 kubelet[731]: E1210 06:24:55.669697     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:07 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:07.881072     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:25:08 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:08.007794     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:25:08 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:08.008013     731 scope.go:117] "RemoveContainer" containerID="b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	Dec 10 06:25:08 default-k8s-diff-port-643991 kubelet[731]: E1210 06:25:08.008262     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:13 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:13.024015     731 scope.go:117] "RemoveContainer" containerID="6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad"
	Dec 10 06:25:15 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:15.669759     731 scope.go:117] "RemoveContainer" containerID="b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	Dec 10 06:25:15 default-k8s-diff-port-643991 kubelet[731]: E1210 06:25:15.670367     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:25 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:25.881700     731 scope.go:117] "RemoveContainer" containerID="b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	Dec 10 06:25:25 default-k8s-diff-port-643991 kubelet[731]: E1210 06:25:25.881902     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: kubelet.service: Consumed 1.844s CPU time.
	
	
	==> kubernetes-dashboard [422df2b0f39e08d2d4c6e2bf49639bf603b80a30c5394dde35005c632322f85f] <==
	2025/12/10 06:24:48 Starting overwatch
	2025/12/10 06:24:48 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:48 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:48 Using secret token for csrf signing
	2025/12/10 06:24:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:48 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 06:24:48 Generating JWE encryption key
	2025/12/10 06:24:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:48 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:48 Creating in-cluster Sidecar client
	2025/12/10 06:24:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:48 Serving insecurely on HTTP port: 9090
	2025/12/10 06:25:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad] <==
	I1210 06:24:42.224021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:25:12.228379       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66] <==
	I1210 06:25:13.082536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:25:13.090966       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:25:13.091019       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:25:13.093867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.549401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:20.810250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:24.409159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:27.462952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:30.485898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:30.490665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:30.490826       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:25:30.490983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643991_1bc24be0-11c6-4228-a920-d5f1cc758d90!
	I1210 06:25:30.490987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed6e8b9e-41cf-4e31-adb7-3192df14d1bf", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-643991_1bc24be0-11c6-4228-a920-d5f1cc758d90 became leader
	W1210 06:25:30.492899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:30.496238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:30.591529       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643991_1bc24be0-11c6-4228-a920-d5f1cc758d90!
	W1210 06:25:32.499723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:32.503961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:34.507845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:34.513514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:36.519605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:36.526936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991: exit status 2 (396.236127ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-643991
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-643991:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484",
	        "Created": "2025-12-10T06:23:22.165163212Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331400,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:24:29.714169609Z",
	            "FinishedAt": "2025-12-10T06:24:28.627745692Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/hostname",
	        "HostsPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/hosts",
	        "LogPath": "/var/lib/docker/containers/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484/acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484-json.log",
	        "Name": "/default-k8s-diff-port-643991",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-643991:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-643991",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acbf5c836807542e08b70cb1897e4c8cb6cabdd645d3167a86ed0db13940e484",
	                "LowerDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf1f161019268f5442645519aa310b9ea0a75bd69c7663b67c3505eec1791fb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-643991",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-643991/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-643991",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-643991",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-643991",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ced51ff946f764b9ea868e0ca91b0881af554e482a4812176dbdfa8ace024bd9",
	            "SandboxKey": "/var/run/docker/netns/ced51ff946f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-643991": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0a24a8ad90ffaf1aa41a72e3c38eeed58406686f85b6ce46090a934c6571e421",
	                    "EndpointID": "65c176ec3f15ee6d6e3e50c7463c9927f468423199a9523673581821eed10f75",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "12:48:ad:96:f6:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-643991",
	                        "acbf5c836807"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991: exit status 2 (367.958822ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643991 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-643991 logs -n 25: (1.329678055s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ stop    │ -p newest-cni-126107 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-126107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ image   │ default-k8s-diff-port-643991 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p default-k8s-diff-port-643991 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:25:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:25:30.109871  345537 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:30.110102  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110112  345537 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:30.110116  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110304  345537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:30.110756  345537 out.go:368] Setting JSON to false
	I1210 06:25:30.111768  345537 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4081,"bootTime":1765343849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:25:30.111827  345537 start.go:143] virtualization: kvm guest
	I1210 06:25:30.113927  345537 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:25:30.115752  345537 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:25:30.115752  345537 notify.go:221] Checking for updates...
	I1210 06:25:30.118522  345537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:25:30.119763  345537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:30.121229  345537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:25:30.122829  345537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:25:30.124211  345537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:25:30.126209  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:30.126830  345537 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:25:30.151836  345537 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:25:30.151928  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.210924  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.200725078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.211055  345537 docker.go:319] overlay module found
	I1210 06:25:30.212977  345537 out.go:179] * Using the docker driver based on existing profile
	I1210 06:25:30.214243  345537 start.go:309] selected driver: docker
	I1210 06:25:30.214258  345537 start.go:927] validating driver "docker" against &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.214369  345537 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:25:30.215062  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.276019  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.266281878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.276342  345537 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:30.276370  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:30.276425  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:30.276460  345537 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.278593  345537 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:25:30.279972  345537 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:25:30.281412  345537 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:25:30.282704  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:30.282744  345537 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:25:30.282754  345537 cache.go:65] Caching tarball of preloaded images
	I1210 06:25:30.282808  345537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:25:30.282846  345537 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:25:30.282857  345537 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:25:30.282949  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.303637  345537 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:25:30.303656  345537 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:25:30.303670  345537 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:25:30.303700  345537 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:25:30.303753  345537 start.go:364] duration metric: took 36.893µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:25:30.303770  345537 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:25:30.303776  345537 fix.go:54] fixHost starting: 
	I1210 06:25:30.303978  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.322589  345537 fix.go:112] recreateIfNeeded on newest-cni-126107: state=Stopped err=<nil>
	W1210 06:25:30.322625  345537 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:25:30.324708  345537 out.go:252] * Restarting existing docker container for "newest-cni-126107" ...
	I1210 06:25:30.324786  345537 cli_runner.go:164] Run: docker start newest-cni-126107
	I1210 06:25:30.586048  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.606349  345537 kic.go:430] container "newest-cni-126107" state is running.
	I1210 06:25:30.606765  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:30.626578  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.626856  345537 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:30.626926  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:30.645878  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:30.646136  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:30.646149  345537 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:30.646758  345537 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53190->127.0.0.1:33139: read: connection reset by peer
	I1210 06:25:33.780525  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.780558  345537 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:33.780660  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.800442  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.800684  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.800700  345537 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:33.947960  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.948061  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.968186  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.968388  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.968404  345537 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:34.105319  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:34.105346  345537 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:34.105372  345537 ubuntu.go:190] setting up certificates
	I1210 06:25:34.105382  345537 provision.go:84] configureAuth start
	I1210 06:25:34.105437  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.124550  345537 provision.go:143] copyHostCerts
	I1210 06:25:34.124621  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:34.124635  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:34.124709  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:34.124824  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:34.124833  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:34.124860  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:34.124930  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:34.124937  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:34.124961  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:34.125025  345537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:34.193303  345537 provision.go:177] copyRemoteCerts
	I1210 06:25:34.193367  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:34.193402  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.212955  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.311230  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:34.330510  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:34.354851  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:34.373898  345537 provision.go:87] duration metric: took 268.50473ms to configureAuth
	I1210 06:25:34.373925  345537 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:34.374105  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:34.374216  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.394026  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:34.394302  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:34.394331  345537 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:34.690980  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:34.691015  345537 machine.go:97] duration metric: took 4.064140388s to provisionDockerMachine
	I1210 06:25:34.691029  345537 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:34.691080  345537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:34.691147  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:34.691183  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.710980  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.809193  345537 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:34.813269  345537 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:34.813313  345537 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:34.813327  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:34.813383  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:34.813505  345537 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:34.813619  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:34.821859  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:34.840261  345537 start.go:296] duration metric: took 149.200393ms for postStartSetup
	I1210 06:25:34.840342  345537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:34.840397  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.859669  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.953162  345537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:34.958380  345537 fix.go:56] duration metric: took 4.654596627s for fixHost
	I1210 06:25:34.958415  345537 start.go:83] releasing machines lock for "newest-cni-126107", held for 4.654651631s
	I1210 06:25:34.958495  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.980057  345537 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:34.980079  345537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:34.980145  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.980146  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:35.002231  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.002423  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.094904  345537 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:35.153916  345537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:35.191258  345537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:35.196136  345537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:35.196197  345537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:35.204676  345537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:25:35.204704  345537 start.go:496] detecting cgroup driver to use...
	I1210 06:25:35.204735  345537 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:35.204795  345537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:35.220331  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:35.233476  345537 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:35.233536  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:35.248932  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:35.263006  345537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:35.344446  345537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:35.426091  345537 docker.go:234] disabling docker service ...
	I1210 06:25:35.426167  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:35.440762  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:35.453694  345537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:35.544590  345537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:35.623824  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:35.636961  345537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:35.651831  345537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:35.651879  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.661164  345537 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:35.661233  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.670965  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.681369  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.691670  345537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:35.702453  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.712207  345537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.722297  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.731324  345537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:35.740103  345537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:35.748317  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:35.839721  345537 ssh_runner.go:195] Run: sudo systemctl restart crio
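Taken together, the sed edits above (pause image, systemd cgroup manager, conmon_cgroup, unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below before crio is restarted. This is a sketch reconstructed from the commands, with section placement per CRI-O's documented config layout, not a dump of the actual file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The ip_forward write and the bridge-nf-call-iptables probe are separate kernel sysctls and are not part of this file.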
	I1210 06:25:35.979010  345537 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:35.979076  345537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:35.983132  345537 start.go:564] Will wait 60s for crictl version
	I1210 06:25:35.983199  345537 ssh_runner.go:195] Run: which crictl
	I1210 06:25:35.986794  345537 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:36.012672  345537 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:36.012774  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.047570  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.081122  345537 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:36.085692  345537 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:36.104980  345537 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:36.109299  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
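The one-liner above is the usual pattern for editing a bind-mounted /etc/hosts: drop any stale host.minikube.internal entry, append the fresh mapping, and write the temp file back with sudo cp, since inside a Docker-driver container /etc/hosts is a bind mount and has to be rewritten in place rather than replaced by a rename. A generic sketch of the same idiom (variable names here are illustrative):

	HOST=host.minikube.internal IP=192.168.85.1
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$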
	I1210 06:25:36.123029  345537 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:36.124561  345537 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:36.124698  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:36.124754  345537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:36.163641  345537 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:36.163668  345537 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:36.163725  345537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:36.192283  345537 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:36.192308  345537 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:36.192319  345537 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:36.192485  345537 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
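The paired ExecStart= lines in the kubelet unit above are the standard systemd drop-in idiom: for a non-oneshot service ExecStart may only be set once, so the drop-in first clears the inherited value with an empty ExecStart= and then supplies the full command line carrying the minikube-specific flags. On the node, the merged unit can be inspected with:

	# base unit plus the 10-kubeadm.conf drop-in written a few lines below
	sudo systemctl cat kubelet
	# effective command line after the drop-in is applied
	sudo systemctl show kubelet -p ExecStart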
	I1210 06:25:36.192577  345537 ssh_runner.go:195] Run: crio config
	I1210 06:25:36.242010  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:36.242038  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:36.242057  345537 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:36.242093  345537 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:36.242249  345537 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
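The manifest above is a standard multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file); a few lines below it is copied to the node as /var/tmp/minikube/kubeadm.yaml.new and, because this is a restart, only diffed against the existing kubeadm.yaml. On a fresh node such a file would typically be consumed with something like the following (ordinary kubeadm usage, not a command taken from this run):

	# validate the generated config without touching the cluster
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run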
	
	I1210 06:25:36.242323  345537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:36.252671  345537 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:36.252732  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:36.263066  345537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:36.278849  345537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:36.292835  345537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1210 06:25:36.307500  345537 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:36.311352  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:36.322425  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:36.407644  345537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:36.428133  345537 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:36.428155  345537 certs.go:195] generating shared ca certs ...
	I1210 06:25:36.428176  345537 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:36.428342  345537 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:36.428400  345537 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:36.428414  345537 certs.go:257] generating profile certs ...
	I1210 06:25:36.428543  345537 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:36.428653  345537 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:36.428711  345537 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:36.428855  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:36.428888  345537 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:36.428900  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:36.428925  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:36.428958  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:36.428996  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:36.429054  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:36.429757  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:36.450791  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:36.473953  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:36.495582  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:36.521273  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:36.544440  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:36.563566  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:36.583534  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:36.604712  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:36.622925  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:36.644601  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:36.663142  345537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:36.677349  345537 ssh_runner.go:195] Run: openssl version
	I1210 06:25:36.683704  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.691269  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:36.699881  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.704542  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.704607  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.741885  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:36.749752  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.758272  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:36.768438  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.772964  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.773015  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.810995  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:36.818904  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.827591  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:36.836196  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.840276  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.840333  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.880057  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
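The test/ln/openssl groups above follow the standard OpenSSL hashed CA directory layout: each cert is linked into /etc/ssl/certs under its own name, and the test -L /etc/ssl/certs/<hash>.0 calls verify that the subject-hash symlink used by TLS lookups is in place. A sketch of how such a hash link is derived (b5213941 is the hash this run goes on to check for minikubeCA.pem):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # the link the test -L check expects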
	I1210 06:25:36.893799  345537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:36.899891  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:25:36.939598  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:25:36.986565  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:25:37.033737  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:25:37.088093  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:25:37.139249  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
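Each -checkend 86400 call above asks whether the named control-plane certificate is still valid for at least another 86400 seconds (24 hours): openssl exits 0 if the cert will not expire within that window and non-zero if it will (or already has), so a failing status flags a cert that needs renewing before reuse. Standalone equivalent of one of the checks:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h, renew"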
	I1210 06:25:37.179916  345537 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:37.180037  345537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:37.180128  345537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:37.214651  345537 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:37.214678  345537 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:37.214685  345537 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:37.214691  345537 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:37.214695  345537 cri.go:89] found id: ""
	I1210 06:25:37.214743  345537 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:25:37.228831  345537 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:37Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:37.228906  345537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:37.239408  345537 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:25:37.239432  345537 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:25:37.239500  345537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:25:37.248945  345537 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:25:37.249662  345537 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-126107" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:37.249998  345537 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8832/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-126107" cluster setting kubeconfig missing "newest-cni-126107" context setting]
	I1210 06:25:37.250682  345537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.252543  345537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:25:37.262369  345537 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 06:25:37.262409  345537 kubeadm.go:602] duration metric: took 22.970004ms to restartPrimaryControlPlane
	I1210 06:25:37.262426  345537 kubeadm.go:403] duration metric: took 82.529817ms to StartCluster
	I1210 06:25:37.262445  345537 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.262545  345537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:37.263655  345537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.263975  345537 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:25:37.264116  345537 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:25:37.264214  345537 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-126107"
	I1210 06:25:37.264218  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:37.264232  345537 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-126107"
	W1210 06:25:37.264245  345537 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:25:37.264239  345537 addons.go:70] Setting dashboard=true in profile "newest-cni-126107"
	I1210 06:25:37.264258  345537 addons.go:239] Setting addon dashboard=true in "newest-cni-126107"
	I1210 06:25:37.264262  345537 addons.go:70] Setting default-storageclass=true in profile "newest-cni-126107"
	W1210 06:25:37.264266  345537 addons.go:248] addon dashboard should already be in state true
	I1210 06:25:37.264276  345537 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-126107"
	I1210 06:25:37.264289  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.264276  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.264613  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.264788  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.264788  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.267674  345537 out.go:179] * Verifying Kubernetes components...
	I1210 06:25:37.269553  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:37.291399  345537 addons.go:239] Setting addon default-storageclass=true in "newest-cni-126107"
	W1210 06:25:37.291482  345537 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:25:37.291526  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.292024  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.293251  345537 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:25:37.294720  345537 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:37.294739  345537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:25:37.294792  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.294941  345537 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:25:37.297597  345537 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Dec 10 06:24:52 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:24:52.734316391Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 06:24:52 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:24:52.738207346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 06:24:52 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:24:52.738231424Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.881657326Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aef6e236-85da-48a1-a8da-e065f9fcfce3 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.884988194Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c49197bc-590a-4671-8f29-2ed7c2384b9a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.888354957Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper" id=dbfd6f11-deab-4da5-b00c-b650d4220805 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.888532538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.89507295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.895638703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.935098253Z" level=info msg="Created container b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper" id=dbfd6f11-deab-4da5-b00c-b650d4220805 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.93585391Z" level=info msg="Starting container: b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe" id=a63b51b2-e550-419c-acef-3c1339f6f631 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:07 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:07.938442827Z" level=info msg="Started container" PID=1779 containerID=b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper id=a63b51b2-e550-419c-acef-3c1339f6f631 name=/runtime.v1.RuntimeService/StartContainer sandboxID=872e55770732aa127aaf222fed14f4c77f04cd5c08d947cdec44cf88aee20465
	Dec 10 06:25:08 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:08.009172219Z" level=info msg="Removing container: 339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8" id=be4998b3-2560-4918-bbee-dea8844dab27 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:25:08 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:08.020378611Z" level=info msg="Removed container 339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v/dashboard-metrics-scraper" id=be4998b3-2560-4918-bbee-dea8844dab27 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.024408777Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=942903a7-08dc-4a22-a5bc-45c630cbd537 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.025546355Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e6bc6146-881a-4dfe-b9eb-d239312249af name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.026723318Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b3196b3e-e032-4f6e-a8f2-75babe5a8989 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.02686167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031262228Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031454195Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8eeceb616a3fb87fa178f86183db3b3d874408a8a028f2536f830e4edfbf4bb4/merged/etc/passwd: no such file or directory"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031499158Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8eeceb616a3fb87fa178f86183db3b3d874408a8a028f2536f830e4edfbf4bb4/merged/etc/group: no such file or directory"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.031785103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.066163925Z" level=info msg="Created container 86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66: kube-system/storage-provisioner/storage-provisioner" id=b3196b3e-e032-4f6e-a8f2-75babe5a8989 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.066852198Z" level=info msg="Starting container: 86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66" id=325639e9-8eea-4b73-bcae-18794aa7f431 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:13 default-k8s-diff-port-643991 crio[571]: time="2025-12-10T06:25:13.069216405Z" level=info msg="Started container" PID=1797 containerID=86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66 description=kube-system/storage-provisioner/storage-provisioner id=325639e9-8eea-4b73-bcae-18794aa7f431 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d503f6924a50a4effa8ce0b593441298bd1f2210180cd8bb93dee5c3055d825
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	86d39ffede9e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   8d503f6924a50       storage-provisioner                                    kube-system
	b4a88d6bb8c02       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   872e55770732a       dashboard-metrics-scraper-6ffb444bf9-sqs8v             kubernetes-dashboard
	422df2b0f39e0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago      Running             kubernetes-dashboard        0                   ad7f6c4e7fe6b       kubernetes-dashboard-855c9754f9-llkbc                  kubernetes-dashboard
	0d4cc9c331feb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   fc1b26dd40cc4       busybox                                                default
	bbaf4ecac9fe9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   fede07f270728       coredns-66bc5c9577-znsz6                               kube-system
	08a9a203d5bf2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   7cb9b8bf9b4cc       kindnet-7j6ns                                          kube-system
	6c44f81745b50       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   8d503f6924a50       storage-provisioner                                    kube-system
	fbe2e6e498d24       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           56 seconds ago      Running             kube-proxy                  0                   82256b58103e3       kube-proxy-mkpzc                                       kube-system
	8cb208db60562       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           59 seconds ago      Running             kube-apiserver              0                   1707693fe89c9       kube-apiserver-default-k8s-diff-port-643991            kube-system
	939f270b7e908       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           59 seconds ago      Running             kube-controller-manager     0                   df9d09efbd2cf       kube-controller-manager-default-k8s-diff-port-643991   kube-system
	9b258fc04f844       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           59 seconds ago      Running             kube-scheduler              0                   0f86fbd80a599       kube-scheduler-default-k8s-diff-port-643991            kube-system
	e3522bb390040       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   29a45a9cede7c       etcd-default-k8s-diff-port-643991                      kube-system
	
	
	==> coredns [bbaf4ecac9fe942940a527bc2c9d5ac2c8852370073f22a48aa6290e24b8f4fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39326 - 41500 "HINFO IN 2741511875219942973.7423051502085985891. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054343815s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-643991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-643991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=default-k8s-diff-port-643991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_23_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:23:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-643991
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 06:25:12 +0000   Wed, 10 Dec 2025 06:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-643991
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                dd07e36d-8369-41a9-8fa1-68f38e5abb55
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-znsz6                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-default-k8s-diff-port-643991                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-7j6ns                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-default-k8s-diff-port-643991             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-643991    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-mkpzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-default-k8s-diff-port-643991             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-sqs8v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-llkbc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node default-k8s-diff-port-643991 event: Registered Node default-k8s-diff-port-643991 in Controller
	  Normal  NodeReady                102s               kubelet          Node default-k8s-diff-port-643991 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-643991 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node default-k8s-diff-port-643991 event: Registered Node default-k8s-diff-port-643991 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [e3522bb390040c1d32dccb4cfcacd9939770bc3064f9bb9dac4051ec77431f13] <==
	{"level":"warn","ts":"2025-12-10T06:24:40.599232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.607786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.616767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.623342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.630987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.639538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.646963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.656660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.664544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.671873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.680758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.688223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.696282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.703869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.713057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.724052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.732788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.740244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.747592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.763030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.770147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.777674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:24:40.833768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:02.623537Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.191332ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357210171067668 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:590 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:65 lease:6414985173316291858 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-10T06:25:02.623670Z","caller":"traceutil/trace.go:172","msg":"trace[1910628044] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"129.299009ms","start":"2025-12-10T06:25:02.494358Z","end":"2025-12-10T06:25:02.623657Z","steps":["trace[1910628044] 'compare'  (duration: 124.055635ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:25:39 up  1:08,  0 user,  load average: 4.53, 4.77, 3.06
	Linux default-k8s-diff-port-643991 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [08a9a203d5bf21ac5648ef3fb7638884576e075da5e173791b5158e58f55d0a9] <==
	I1210 06:24:42.414319       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:24:42.508884       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 06:24:42.509065       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:24:42.509084       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:24:42.509125       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:24:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:24:42.712625       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:24:42.712682       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:24:42.712696       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:24:42.810104       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:24:43.209034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:24:43.209083       1 metrics.go:72] Registering metrics
	I1210 06:24:43.209551       1 controller.go:711] "Syncing nftables rules"
	I1210 06:24:52.712446       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:24:52.712552       1 main.go:301] handling current node
	I1210 06:25:02.713237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:02.713298       1 main.go:301] handling current node
	I1210 06:25:12.713040       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:12.713124       1 main.go:301] handling current node
	I1210 06:25:22.713622       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:22.713661       1 main.go:301] handling current node
	I1210 06:25:32.713842       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 06:25:32.713885       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cb208db605620bb50e399feca07150e2a59edcd3b1bef56613bc9bf58d33577] <==
	I1210 06:24:41.344201       1 aggregator.go:171] initial CRD sync complete...
	I1210 06:24:41.344219       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 06:24:41.344227       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 06:24:41.344234       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:24:41.347141       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:24:41.347352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1210 06:24:41.350155       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:24:41.355105       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1210 06:24:41.358771       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 06:24:41.358785       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 06:24:41.358810       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 06:24:41.365312       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:24:41.379365       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:24:41.639392       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:24:41.675511       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:24:41.697889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:24:41.710438       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:24:41.718851       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:24:41.763599       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.2.59"}
	I1210 06:24:41.776138       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.168.2"}
	I1210 06:24:42.247581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 06:24:44.798527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:24:45.099166       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:45.099166       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:24:45.148447       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [939f270b7e90898a7f21a52e2572b0814d28cd556fbbc16d377a84363bcff231] <==
	I1210 06:24:44.645904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1210 06:24:44.645983       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 06:24:44.651205       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 06:24:44.651232       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 06:24:44.652408       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:44.653596       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 06:24:44.675868       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 06:24:44.694603       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 06:24:44.694629       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 06:24:44.694644       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 06:24:44.694777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 06:24:44.695841       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 06:24:44.695870       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 06:24:44.695890       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1210 06:24:44.695926       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 06:24:44.695932       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 06:24:44.695931       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 06:24:44.697769       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 06:24:44.697787       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 06:24:44.697795       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 06:24:44.700105       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 06:24:44.700237       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 06:24:44.702344       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1210 06:24:44.704614       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1210 06:24:44.718980       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [fbe2e6e498d241f7b238abce28affa8ff3c952a61083e0b428a07c7fc0bac104] <==
	I1210 06:24:42.257266       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:24:42.330681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 06:24:42.431553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 06:24:42.431601       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 06:24:42.431733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:24:42.453767       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:24:42.453832       1 server_linux.go:132] "Using iptables Proxier"
	I1210 06:24:42.460227       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:24:42.460710       1 server.go:527] "Version info" version="v1.34.2"
	I1210 06:24:42.460750       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:42.462267       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:24:42.462292       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:24:42.462290       1 config.go:200] "Starting service config controller"
	I1210 06:24:42.463035       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:24:42.463142       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:24:42.463156       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:24:42.463553       1 config.go:309] "Starting node config controller"
	I1210 06:24:42.464374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:24:42.464400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:24:42.563210       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:24:42.563236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:24:42.563278       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9b258fc04f844289ade513f0963c9827dce6e9c67835e2e2ffc484b28ca58cb9] <==
	I1210 06:24:41.150833       1 serving.go:386] Generated self-signed cert in-memory
	I1210 06:24:41.749030       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 06:24:41.749065       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:24:41.753546       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:41.753563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:41.753586       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:24:41.753598       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:41.753554       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 06:24:41.753651       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 06:24:41.754032       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:24:41.754099       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:24:41.854725       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 06:24:41.854829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 06:24:41.854756       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 06:24:45 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:45.403577     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/390a0b83-fd9c-42b8-8732-362bbb3a7e9a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-llkbc\" (UID: \"390a0b83-fd9c-42b8-8732-362bbb3a7e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-llkbc"
	Dec 10 06:24:45 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:45.403598     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2lh\" (UniqueName: \"kubernetes.io/projected/390a0b83-fd9c-42b8-8732-362bbb3a7e9a-kube-api-access-bq2lh\") pod \"kubernetes-dashboard-855c9754f9-llkbc\" (UID: \"390a0b83-fd9c-42b8-8732-362bbb3a7e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-llkbc"
	Dec 10 06:24:49 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:49.964875     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 10 06:24:50 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:50.958271     731 scope.go:117] "RemoveContainer" containerID="ce49e56c5b17b1cf6a478d5d972b0f65514ace4dcb01b9966653837a659acb5d"
	Dec 10 06:24:50 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:50.968194     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-llkbc" podStartSLOduration=3.259869578 podStartE2EDuration="5.968169656s" podCreationTimestamp="2025-12-10 06:24:45 +0000 UTC" firstStartedPulling="2025-12-10 06:24:45.640752776 +0000 UTC m=+6.881605108" lastFinishedPulling="2025-12-10 06:24:48.349052873 +0000 UTC m=+9.589905186" observedRunningTime="2025-12-10 06:24:48.983409641 +0000 UTC m=+10.224261974" watchObservedRunningTime="2025-12-10 06:24:50.968169656 +0000 UTC m=+12.209021988"
	Dec 10 06:24:51 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:51.963567     731 scope.go:117] "RemoveContainer" containerID="ce49e56c5b17b1cf6a478d5d972b0f65514ace4dcb01b9966653837a659acb5d"
	Dec 10 06:24:51 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:51.963920     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:24:51 default-k8s-diff-port-643991 kubelet[731]: E1210 06:24:51.964108     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:24:52 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:52.970012     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:24:52 default-k8s-diff-port-643991 kubelet[731]: E1210 06:24:52.970186     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:24:55 default-k8s-diff-port-643991 kubelet[731]: I1210 06:24:55.669509     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:24:55 default-k8s-diff-port-643991 kubelet[731]: E1210 06:24:55.669697     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:07 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:07.881072     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:25:08 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:08.007794     731 scope.go:117] "RemoveContainer" containerID="339ff21d2dfe2e75387928abe4db6fd05d843df487685d1886c8728bbdb380c8"
	Dec 10 06:25:08 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:08.008013     731 scope.go:117] "RemoveContainer" containerID="b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	Dec 10 06:25:08 default-k8s-diff-port-643991 kubelet[731]: E1210 06:25:08.008262     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:13 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:13.024015     731 scope.go:117] "RemoveContainer" containerID="6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad"
	Dec 10 06:25:15 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:15.669759     731 scope.go:117] "RemoveContainer" containerID="b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	Dec 10 06:25:15 default-k8s-diff-port-643991 kubelet[731]: E1210 06:25:15.670367     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:25 default-k8s-diff-port-643991 kubelet[731]: I1210 06:25:25.881700     731 scope.go:117] "RemoveContainer" containerID="b4a88d6bb8c02020a551d52825b702426f9b22f824c2ca79978cc5ac8b432bfe"
	Dec 10 06:25:25 default-k8s-diff-port-643991 kubelet[731]: E1210 06:25:25.881902     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-sqs8v_kubernetes-dashboard(be53b760-db92-41c0-afde-2722161bed6a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-sqs8v" podUID="be53b760-db92-41c0-afde-2722161bed6a"
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:25:33 default-k8s-diff-port-643991 systemd[1]: kubelet.service: Consumed 1.844s CPU time.
	
	
	==> kubernetes-dashboard [422df2b0f39e08d2d4c6e2bf49639bf603b80a30c5394dde35005c632322f85f] <==
	2025/12/10 06:24:48 Using namespace: kubernetes-dashboard
	2025/12/10 06:24:48 Using in-cluster config to connect to apiserver
	2025/12/10 06:24:48 Using secret token for csrf signing
	2025/12/10 06:24:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/10 06:24:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/10 06:24:48 Successful initial request to the apiserver, version: v1.34.2
	2025/12/10 06:24:48 Generating JWE encryption key
	2025/12/10 06:24:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/10 06:24:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/10 06:24:48 Initializing JWE encryption key from synchronized object
	2025/12/10 06:24:48 Creating in-cluster Sidecar client
	2025/12/10 06:24:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:48 Serving insecurely on HTTP port: 9090
	2025/12/10 06:25:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/10 06:24:48 Starting overwatch
	
	
	==> storage-provisioner [6c44f81745b509fdff07279555f62e28970767ad905fb283b7ad65af0a2c26ad] <==
	I1210 06:24:42.224021       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 06:25:12.228379       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [86d39ffede9e7dae753d54daef1a4829b9a1da4b5187509b5e41ec4e0a49ad66] <==
	I1210 06:25:13.082536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 06:25:13.090966       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 06:25:13.091019       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1210 06:25:13.093867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:16.549401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:20.810250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:24.409159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:27.462952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:30.485898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:30.490665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:30.490826       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 06:25:30.490983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643991_1bc24be0-11c6-4228-a920-d5f1cc758d90!
	I1210 06:25:30.490987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ed6e8b9e-41cf-4e31-adb7-3192df14d1bf", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-643991_1bc24be0-11c6-4228-a920-d5f1cc758d90 became leader
	W1210 06:25:30.492899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:30.496238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1210 06:25:30.591529       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-643991_1bc24be0-11c6-4228-a920-d5f1cc758d90!
	W1210 06:25:32.499723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:32.503961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:34.507845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:34.513514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:36.519605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:36.526936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:38.530076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 06:25:38.537721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991: exit status 2 (355.523964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.49s)
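The status probe above still reports the API server as "Running" while the command exits with status 2, which only tells us that some component was not in the state the helper expected after the failed pause. As a manual follow-up (a sketch only, assuming the docker driver and the same profile name used in this run), one could ask Docker directly whether the node container was actually frozen and then re-run the status probe with verbose logging:

	docker inspect --format '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-643991
	out/minikube-linux-amd64 status -p default-k8s-diff-port-643991 --alsologtostderr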

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-126107 --alsologtostderr -v=1
E1210 06:25:42.539416   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.545894   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.557340   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.578866   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.620428   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.701841   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-126107 --alsologtostderr -v=1: exit status 80 (2.442398755s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-126107 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:25:40.730147  349310 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:40.730458  349310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:40.730482  349310 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:40.730489  349310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:40.730698  349310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:40.730937  349310 out.go:368] Setting JSON to false
	I1210 06:25:40.730957  349310 mustload.go:66] Loading cluster: newest-cni-126107
	I1210 06:25:40.731360  349310 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:40.731801  349310 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:40.752604  349310 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:40.752860  349310 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:40.820586  349310 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-10 06:25:40.808221483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:40.822100  349310 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-126107 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 06:25:40.824345  349310 out.go:179] * Pausing node newest-cni-126107 ... 
	I1210 06:25:40.825580  349310 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:40.825921  349310 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:40.825971  349310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:40.848065  349310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:40.946885  349310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:40.959233  349310 pause.go:52] kubelet running: true
	I1210 06:25:40.959305  349310 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:41.093638  349310 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:41.093749  349310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:41.160893  349310 cri.go:89] found id: "31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7"
	I1210 06:25:41.160914  349310 cri.go:89] found id: "4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d"
	I1210 06:25:41.160917  349310 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:41.160920  349310 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:41.160923  349310 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:41.160927  349310 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:41.160930  349310 cri.go:89] found id: ""
	I1210 06:25:41.160973  349310 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:41.172669  349310 retry.go:31] will retry after 138.709231ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:41Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:41.312098  349310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:41.325531  349310 pause.go:52] kubelet running: false
	I1210 06:25:41.325579  349310 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:41.439250  349310 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:41.439339  349310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:41.509039  349310 cri.go:89] found id: "31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7"
	I1210 06:25:41.509071  349310 cri.go:89] found id: "4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d"
	I1210 06:25:41.509076  349310 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:41.509080  349310 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:41.509082  349310 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:41.509086  349310 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:41.509088  349310 cri.go:89] found id: ""
	I1210 06:25:41.509132  349310 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:41.521026  349310 retry.go:31] will retry after 521.530681ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:41Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:42.043662  349310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:42.059432  349310 pause.go:52] kubelet running: false
	I1210 06:25:42.059513  349310 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:42.188235  349310 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:42.188331  349310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:42.261208  349310 cri.go:89] found id: "31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7"
	I1210 06:25:42.261229  349310 cri.go:89] found id: "4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d"
	I1210 06:25:42.261236  349310 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:42.261240  349310 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:42.261244  349310 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:42.261249  349310 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:42.261253  349310 cri.go:89] found id: ""
	I1210 06:25:42.261319  349310 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:42.273744  349310 retry.go:31] will retry after 608.739433ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:42Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:42.883642  349310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:25:42.897254  349310 pause.go:52] kubelet running: false
	I1210 06:25:42.897317  349310 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 06:25:43.017028  349310 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 06:25:43.017113  349310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 06:25:43.089694  349310 cri.go:89] found id: "31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7"
	I1210 06:25:43.089713  349310 cri.go:89] found id: "4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d"
	I1210 06:25:43.089716  349310 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:43.089720  349310 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:43.089723  349310 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:43.089726  349310 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:43.089729  349310 cri.go:89] found id: ""
	I1210 06:25:43.089767  349310 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 06:25:43.104326  349310 out.go:203] 
	W1210 06:25:43.105590  349310 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 06:25:43.105609  349310 out.go:285] * 
	* 
	W1210 06:25:43.109599  349310 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:25:43.111250  349310 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-126107 --alsologtostderr -v=1 failed: exit status 80
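Each retry of the pause flow above fails at the same step: kubelet is stopped and crictl still lists the kube-system containers, but "sudo runc list -f json" aborts with "open /run/runc: no such file or directory". A quick way to check this mismatch by hand (illustrative only; whether CRI-O keeps its runtime state under /run/runc or somewhere else depends on its configuration, so the paths below are assumptions) is to compare what crictl and runc can see inside the node and to look for a configured runtime_root:

	out/minikube-linux-amd64 -p newest-cni-126107 ssh "sudo crictl ps"
	out/minikube-linux-amd64 -p newest-cni-126107 ssh "sudo ls /run/runc"
	out/minikube-linux-amd64 -p newest-cni-126107 ssh "sudo grep -r runtime_root /etc/crio 2>/dev/null"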
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-126107
helpers_test.go:244: (dbg) docker inspect newest-cni-126107:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647",
	        "Created": "2025-12-10T06:25:04.189215995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345744,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:25:30.353220857Z",
	            "FinishedAt": "2025-12-10T06:25:29.461038601Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/hosts",
	        "LogPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647-json.log",
	        "Name": "/newest-cni-126107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-126107:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-126107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647",
	                "LowerDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-126107",
	                "Source": "/var/lib/docker/volumes/newest-cni-126107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-126107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-126107",
	                "name.minikube.sigs.k8s.io": "newest-cni-126107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "26c73795111b1a1e1cc9cd9d3d8cb35a49de63475ca43fa6bc3afa3cb4c31e42",
	            "SandboxKey": "/var/run/docker/netns/26c73795111b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-126107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fb43db5713696641e964d1432fde86d3443ec48d700f0cf8b03518e1f4ba75f2",
	                    "EndpointID": "433860311d732e13494c8c0c312f95ffc2028967edb4e36309f5c24296cc2fa8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:25:15:f4:c6:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-126107",
	                        "fd722e851bba"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107
E1210 06:25:43.184747   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107: exit status 2 (323.207463ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-126107 logs -n 25
E1210 06:25:43.826610   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ stop    │ -p newest-cni-126107 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-126107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ default-k8s-diff-port-643991 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p default-k8s-diff-port-643991 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-643991                                                                                                                                                                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ newest-cni-126107 image list --format=json                                                                                                                                                                                                           │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p newest-cni-126107 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-643991                                                                                                                                                                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:25:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:25:30.109871  345537 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:30.110102  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110112  345537 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:30.110116  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110304  345537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:30.110756  345537 out.go:368] Setting JSON to false
	I1210 06:25:30.111768  345537 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4081,"bootTime":1765343849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:25:30.111827  345537 start.go:143] virtualization: kvm guest
	I1210 06:25:30.113927  345537 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:25:30.115752  345537 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:25:30.115752  345537 notify.go:221] Checking for updates...
	I1210 06:25:30.118522  345537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:25:30.119763  345537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:30.121229  345537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:25:30.122829  345537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:25:30.124211  345537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:25:30.126209  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:30.126830  345537 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:25:30.151836  345537 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:25:30.151928  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.210924  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.200725078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.211055  345537 docker.go:319] overlay module found
	I1210 06:25:30.212977  345537 out.go:179] * Using the docker driver based on existing profile
	I1210 06:25:30.214243  345537 start.go:309] selected driver: docker
	I1210 06:25:30.214258  345537 start.go:927] validating driver "docker" against &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.214369  345537 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:25:30.215062  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.276019  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.266281878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.276342  345537 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:30.276370  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:30.276425  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:30.276460  345537 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.278593  345537 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:25:30.279972  345537 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:25:30.281412  345537 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:25:30.282704  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:30.282744  345537 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:25:30.282754  345537 cache.go:65] Caching tarball of preloaded images
	I1210 06:25:30.282808  345537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:25:30.282846  345537 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:25:30.282857  345537 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:25:30.282949  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.303637  345537 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:25:30.303656  345537 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:25:30.303670  345537 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:25:30.303700  345537 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:25:30.303753  345537 start.go:364] duration metric: took 36.893µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:25:30.303770  345537 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:25:30.303776  345537 fix.go:54] fixHost starting: 
	I1210 06:25:30.303978  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.322589  345537 fix.go:112] recreateIfNeeded on newest-cni-126107: state=Stopped err=<nil>
	W1210 06:25:30.322625  345537 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:25:30.324708  345537 out.go:252] * Restarting existing docker container for "newest-cni-126107" ...
	I1210 06:25:30.324786  345537 cli_runner.go:164] Run: docker start newest-cni-126107
	I1210 06:25:30.586048  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.606349  345537 kic.go:430] container "newest-cni-126107" state is running.
	I1210 06:25:30.606765  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:30.626578  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.626856  345537 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:30.626926  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:30.645878  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:30.646136  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:30.646149  345537 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:30.646758  345537 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53190->127.0.0.1:33139: read: connection reset by peer
	I1210 06:25:33.780525  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.780558  345537 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:33.780660  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.800442  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.800684  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.800700  345537 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:33.947960  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.948061  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.968186  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.968388  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.968404  345537 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:34.105319  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:25:34.105346  345537 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:34.105372  345537 ubuntu.go:190] setting up certificates
	I1210 06:25:34.105382  345537 provision.go:84] configureAuth start
	I1210 06:25:34.105437  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.124550  345537 provision.go:143] copyHostCerts
	I1210 06:25:34.124621  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:34.124635  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:34.124709  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:34.124824  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:34.124833  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:34.124860  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:34.124930  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:34.124937  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:34.124961  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:34.125025  345537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:34.193303  345537 provision.go:177] copyRemoteCerts
	I1210 06:25:34.193367  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:34.193402  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.212955  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.311230  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:34.330510  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:34.354851  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:34.373898  345537 provision.go:87] duration metric: took 268.50473ms to configureAuth
	I1210 06:25:34.373925  345537 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:34.374105  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:34.374216  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.394026  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:34.394302  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:34.394331  345537 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:34.690980  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:34.691015  345537 machine.go:97] duration metric: took 4.064140388s to provisionDockerMachine
	I1210 06:25:34.691029  345537 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:34.691080  345537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:34.691147  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:34.691183  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.710980  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.809193  345537 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:34.813269  345537 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:34.813313  345537 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:34.813327  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:34.813383  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:34.813505  345537 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:34.813619  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:34.821859  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:34.840261  345537 start.go:296] duration metric: took 149.200393ms for postStartSetup
	I1210 06:25:34.840342  345537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:34.840397  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.859669  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.953162  345537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:34.958380  345537 fix.go:56] duration metric: took 4.654596627s for fixHost
	I1210 06:25:34.958415  345537 start.go:83] releasing machines lock for "newest-cni-126107", held for 4.654651631s
	I1210 06:25:34.958495  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.980057  345537 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:34.980079  345537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:34.980145  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.980146  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:35.002231  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.002423  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.094904  345537 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:35.153916  345537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:35.191258  345537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:35.196136  345537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:35.196197  345537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:35.204676  345537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:25:35.204704  345537 start.go:496] detecting cgroup driver to use...
	I1210 06:25:35.204735  345537 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:35.204795  345537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:35.220331  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:35.233476  345537 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:35.233536  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:35.248932  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:35.263006  345537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:35.344446  345537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:35.426091  345537 docker.go:234] disabling docker service ...
	I1210 06:25:35.426167  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:35.440762  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:35.453694  345537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:35.544590  345537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:35.623824  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:35.636961  345537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:35.651831  345537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:35.651879  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.661164  345537 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:35.661233  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.670965  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.681369  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.691670  345537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:35.702453  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.712207  345537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.722297  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.731324  345537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:35.740103  345537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:35.748317  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:35.839721  345537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 06:25:35.979010  345537 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:35.979076  345537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:35.983132  345537 start.go:564] Will wait 60s for crictl version
	I1210 06:25:35.983199  345537 ssh_runner.go:195] Run: which crictl
	I1210 06:25:35.986794  345537 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:36.012672  345537 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:36.012774  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.047570  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.081122  345537 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:36.085692  345537 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:36.104980  345537 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:36.109299  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:36.123029  345537 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:36.124561  345537 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:36.124698  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:36.124754  345537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:36.163641  345537 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:36.163668  345537 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:36.163725  345537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:36.192283  345537 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:36.192308  345537 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:36.192319  345537 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:36.192485  345537 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:25:36.192577  345537 ssh_runner.go:195] Run: crio config
	I1210 06:25:36.242010  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:36.242038  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:36.242057  345537 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:36.242093  345537 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:36.242249  345537 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:25:36.242323  345537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:36.252671  345537 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:36.252732  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:36.263066  345537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:36.278849  345537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:36.292835  345537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
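	
	The kubeadm.yaml written above bundles four API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) in one multi-document file. As a minimal illustrative sketch only (not minikube code), such a file can be split and inspected with gopkg.in/yaml.v3; the path below is the one shown in the log and would normally exist inside the node, not on the test host:
	
	    package main
	
	    import (
	        "fmt"
	        "io"
	        "os"
	
	        "gopkg.in/yaml.v3"
	    )
	
	    func main() {
	        // Path taken from the log above; on a real run this file lives inside the node.
	        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        defer f.Close()
	
	        // Decode each "---"-separated document and report its kind/apiVersion.
	        dec := yaml.NewDecoder(f)
	        for {
	            var doc map[string]interface{}
	            if err := dec.Decode(&doc); err != nil {
	                if err == io.EOF {
	                    break
	                }
	                fmt.Println("decode error:", err)
	                return
	            }
	            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	        }
	    }
	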
	I1210 06:25:36.307500  345537 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:36.311352  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:36.322425  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:36.407644  345537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:36.428133  345537 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:36.428155  345537 certs.go:195] generating shared ca certs ...
	I1210 06:25:36.428176  345537 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:36.428342  345537 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:36.428400  345537 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:36.428414  345537 certs.go:257] generating profile certs ...
	I1210 06:25:36.428543  345537 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:36.428653  345537 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:36.428711  345537 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:36.428855  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:36.428888  345537 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:36.428900  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:36.428925  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:36.428958  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:36.428996  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:36.429054  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:36.429757  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:36.450791  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:36.473953  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:36.495582  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:36.521273  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:36.544440  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:36.563566  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:36.583534  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:36.604712  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:36.622925  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:36.644601  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:36.663142  345537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:36.677349  345537 ssh_runner.go:195] Run: openssl version
	I1210 06:25:36.683704  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.691269  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:36.699881  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.704542  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.704607  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.741885  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:36.749752  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.758272  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:36.768438  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.772964  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.773015  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.810995  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:36.818904  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.827591  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:36.836196  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.840276  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.840333  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.880057  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:25:36.893799  345537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:36.899891  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:25:36.939598  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:25:36.986565  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:25:37.033737  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:25:37.088093  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:25:37.139249  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
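	
	The six openssl runs above use "-checkend 86400" to confirm that none of the control-plane certificates expires within the next 24 hours. A minimal Go sketch of the same check (illustrative only; the path is one of the certs named in the log and exists inside the node, not on the host):
	
	    package main
	
	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )
	
	    // expiresWithin reports whether the PEM certificate at path expires inside the given window,
	    // i.e. the same question "openssl x509 -checkend 86400" answers for a 24h window.
	    func expiresWithin(path string, window time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM data in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(window).After(cert.NotAfter), nil
	    }
	
	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("expires within 24h:", soon)
	    }
	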
	I1210 06:25:37.179916  345537 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:37.180037  345537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:37.180128  345537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:37.214651  345537 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:37.214678  345537 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:37.214685  345537 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:37.214691  345537 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:37.214695  345537 cri.go:89] found id: ""
	I1210 06:25:37.214743  345537 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:25:37.228831  345537 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:37Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:37.228906  345537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:37.239408  345537 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:25:37.239432  345537 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:25:37.239500  345537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:25:37.248945  345537 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:25:37.249662  345537 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-126107" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:37.249998  345537 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8832/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-126107" cluster setting kubeconfig missing "newest-cni-126107" context setting]
	I1210 06:25:37.250682  345537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.252543  345537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:25:37.262369  345537 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 06:25:37.262409  345537 kubeadm.go:602] duration metric: took 22.970004ms to restartPrimaryControlPlane
	I1210 06:25:37.262426  345537 kubeadm.go:403] duration metric: took 82.529817ms to StartCluster
	I1210 06:25:37.262445  345537 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.262545  345537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:37.263655  345537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.263975  345537 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:25:37.264116  345537 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:25:37.264214  345537 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-126107"
	I1210 06:25:37.264218  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:37.264232  345537 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-126107"
	W1210 06:25:37.264245  345537 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:25:37.264239  345537 addons.go:70] Setting dashboard=true in profile "newest-cni-126107"
	I1210 06:25:37.264258  345537 addons.go:239] Setting addon dashboard=true in "newest-cni-126107"
	I1210 06:25:37.264262  345537 addons.go:70] Setting default-storageclass=true in profile "newest-cni-126107"
	W1210 06:25:37.264266  345537 addons.go:248] addon dashboard should already be in state true
	I1210 06:25:37.264276  345537 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-126107"
	I1210 06:25:37.264289  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.264276  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.264613  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.264788  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.264788  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.267674  345537 out.go:179] * Verifying Kubernetes components...
	I1210 06:25:37.269553  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:37.291399  345537 addons.go:239] Setting addon default-storageclass=true in "newest-cni-126107"
	W1210 06:25:37.291482  345537 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:25:37.291526  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.292024  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.293251  345537 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:25:37.294720  345537 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:37.294739  345537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:25:37.294792  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.294941  345537 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:25:37.297597  345537 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:25:37.298850  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:25:37.298869  345537 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:25:37.298930  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.324215  345537 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:37.324416  345537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:25:37.326155  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.339077  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:37.343861  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:37.361713  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:37.441332  345537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:37.458881  345537 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:25:37.458959  345537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:25:37.466318  345537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:37.468827  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:25:37.468849  345537 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:25:37.476962  345537 api_server.go:72] duration metric: took 212.944608ms to wait for apiserver process to appear ...
	I1210 06:25:37.476988  345537 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:25:37.477013  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:37.486897  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:25:37.487100  345537 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:25:37.489013  345537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:37.508740  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:25:37.508823  345537 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:25:37.531937  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:25:37.531961  345537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:25:37.555837  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:25:37.555866  345537 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:25:37.572595  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:25:37.572619  345537 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:25:37.586058  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:25:37.586085  345537 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:25:37.599741  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:25:37.599765  345537 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:25:37.613383  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:25:37.613409  345537 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:25:37.627992  345537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:25:38.728276  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:25:38.728311  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:25:38.728328  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:38.736079  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:25:38.736114  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:25:38.977153  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:38.981719  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:25:38.981748  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:25:39.304231  345537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.837876201s)
	I1210 06:25:39.304305  345537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.81526888s)
	I1210 06:25:39.304518  345537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.67646888s)
	I1210 06:25:39.306569  345537 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-126107 addons enable metrics-server
	
	I1210 06:25:39.317431  345537 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:25:39.319032  345537 addons.go:530] duration metric: took 2.054942448s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:25:39.477432  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:39.482308  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:25:39.482344  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:25:39.977663  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:39.981988  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:25:39.983156  345537 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:25:39.983187  345537 api_server.go:131] duration metric: took 2.506191082s to wait for apiserver health ...
	I1210 06:25:39.983200  345537 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:25:39.989391  345537 system_pods.go:59] 8 kube-system pods found
	I1210 06:25:39.989430  345537 system_pods.go:61] "coredns-7d764666f9-rsznm" [0ac06f22-e09b-497c-ad77-f09e614de459] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:25:39.989439  345537 system_pods.go:61] "etcd-newest-cni-126107" [01d020b0-65ef-48ac-a7fc-abd86d760e8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:25:39.989446  345537 system_pods.go:61] "kindnet-xj7td" [3cf83d19-8dae-4734-bdb5-0ce2410f4c99] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:25:39.989457  345537 system_pods.go:61] "kube-apiserver-newest-cni-126107" [984910c9-c993-4791-9830-55f3632d1af4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:25:39.989462  345537 system_pods.go:61] "kube-controller-manager-newest-cni-126107" [a811eae5-9f29-4614-9ab8-22c76a55f3b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:25:39.989482  345537 system_pods.go:61] "kube-proxy-sxc9w" [7bc19225-90f1-4759-bb4f-bc2da959865d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:25:39.989492  345537 system_pods.go:61] "kube-scheduler-newest-cni-126107" [689e6051-ab4a-4edc-be1d-b6aa4b77b3a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:25:39.989500  345537 system_pods.go:61] "storage-provisioner" [e274ee92-ba8d-446f-a4d8-dd2e9c49ca78] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:25:39.989515  345537 system_pods.go:74] duration metric: took 6.307433ms to wait for pod list to return data ...
	I1210 06:25:39.989526  345537 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:25:39.992513  345537 default_sa.go:45] found service account: "default"
	I1210 06:25:39.992539  345537 default_sa.go:55] duration metric: took 3.007316ms for default service account to be created ...
	I1210 06:25:39.992562  345537 kubeadm.go:587] duration metric: took 2.728539448s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:39.992579  345537 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:25:39.994716  345537 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:25:39.994742  345537 node_conditions.go:123] node cpu capacity is 8
	I1210 06:25:39.994755  345537 node_conditions.go:105] duration metric: took 2.171889ms to run NodePressure ...
	I1210 06:25:39.994765  345537 start.go:242] waiting for startup goroutines ...
	I1210 06:25:39.994772  345537 start.go:247] waiting for cluster config update ...
	I1210 06:25:39.994782  345537 start.go:256] writing updated cluster config ...
	I1210 06:25:39.995033  345537 ssh_runner.go:195] Run: rm -f paused
	I1210 06:25:40.048347  345537 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:40.050410  345537 out.go:179] * Done! kubectl is now configured to use "newest-cni-126107" cluster and "default" namespace by default
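	
	The apiserver readiness wait above follows the usual pattern: /healthz first answers 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal sketch of such a poll loop (illustrative only; the URL is the endpoint from the log, and certificate verification is skipped just as an anonymous probe would skip client credentials):
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    // waitForHealthz polls an HTTPS /healthz endpoint until it returns 200 or the timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                status := resp.StatusCode
	                resp.Body.Close()
	                if status == http.StatusOK {
	                    return nil
	                }
	                fmt.Printf("healthz returned %d, retrying\n", status)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("healthz not ready within %s", timeout)
	    }
	
	    func main() {
	        if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	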
	
	
	==> CRI-O <==
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.815166938Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-sxc9w/POD" id=bb7cf586-9510-4d05-87e2-e50fd9515391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.815220765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.817273041Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.818287056Z" level=info msg="Ran pod sandbox e38f460d235ea0848304127ddd2b8109ca812a52d7cbb6575fd6c99c2b0f2699 with infra container: kube-system/kindnet-xj7td/POD" id=baaf3ced-f22b-4e33-81aa-4813dc18cf18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.818425205Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb7cf586-9510-4d05-87e2-e50fd9515391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.81963345Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d8f7afad-7059-4857-ac83-85fa0eef2a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.820701774Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.821581144Z" level=info msg="Ran pod sandbox 1b17add6060493f78ee6f799a460f2dd77e3d2d98eff4c627ae8fa92eee70192 with infra container: kube-system/kube-proxy-sxc9w/POD" id=bb7cf586-9510-4d05-87e2-e50fd9515391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.82354788Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=fc484a46-ce18-4a41-acd3-f750d4aa5bd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.824545802Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=33c6c654-9d23-4752-97ea-334b460fc455 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.824844843Z" level=info msg="Creating container: kube-system/kindnet-xj7td/kindnet-cni" id=673548e5-21fa-4dff-81ce-dd2f9e037993 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.824951218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.825839787Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5b1718df-4ff9-4b80-b674-9103521a8aef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.828607247Z" level=info msg="Creating container: kube-system/kube-proxy-sxc9w/kube-proxy" id=b4fefbf4-c9e4-42fd-b5b7-ce9d64cff55d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.828735242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.829155912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.829724125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.836720345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.837400928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.860802685Z" level=info msg="Created container 4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d: kube-system/kindnet-xj7td/kindnet-cni" id=673548e5-21fa-4dff-81ce-dd2f9e037993 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.861484601Z" level=info msg="Starting container: 4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d" id=a2e65d14-7e8b-4611-a1cf-e537774d3c75 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.863884811Z" level=info msg="Started container" PID=1053 containerID=4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d description=kube-system/kindnet-xj7td/kindnet-cni id=a2e65d14-7e8b-4611-a1cf-e537774d3c75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e38f460d235ea0848304127ddd2b8109ca812a52d7cbb6575fd6c99c2b0f2699
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.864998629Z" level=info msg="Created container 31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7: kube-system/kube-proxy-sxc9w/kube-proxy" id=b4fefbf4-c9e4-42fd-b5b7-ce9d64cff55d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.865895105Z" level=info msg="Starting container: 31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7" id=b954b46f-75f9-4091-af02-5cdac4cae4a1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.870265782Z" level=info msg="Started container" PID=1054 containerID=31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7 description=kube-system/kube-proxy-sxc9w/kube-proxy id=b954b46f-75f9-4091-af02-5cdac4cae4a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b17add6060493f78ee6f799a460f2dd77e3d2d98eff4c627ae8fa92eee70192
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	31e4cd08f91ff       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   1b17add606049       kube-proxy-sxc9w                            kube-system
	4c8d653511396       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   e38f460d235ea       kindnet-xj7td                               kube-system
	2a2db7437d32a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   84fa657a84fbc       etcd-newest-cni-126107                      kube-system
	9503f5a9aae53       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   2cf788f34b8c3       kube-apiserver-newest-cni-126107            kube-system
	9cc3c395184c1       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   7926444c27431       kube-controller-manager-newest-cni-126107   kube-system
	d55554d77c312       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   57ad64da54a22       kube-scheduler-newest-cni-126107            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-126107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-126107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=newest-cni-126107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:25:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-126107
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-126107
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                48dcb149-8660-4400-bf91-b049b5a968fc
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-126107                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-xj7td                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-126107             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-126107    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-sxc9w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-126107             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22s   node-controller  Node newest-cni-126107 event: Registered Node newest-cni-126107 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-126107 event: Registered Node newest-cni-126107 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0] <==
	{"level":"warn","ts":"2025-12-10T06:25:38.014198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.022122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.030069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.039555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.047334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.054265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.061009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.068502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.075609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.083740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.093603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.106932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.115692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.130688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.137881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.144767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.152084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.158912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.176007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.184390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.200535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.207649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.215921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.223114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.284328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43476","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:44 up  1:08,  0 user,  load average: 4.73, 4.81, 3.08
	Linux newest-cni-126107 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d] <==
	I1210 06:25:40.128994       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:25:40.129298       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:25:40.129480       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:25:40.129503       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:25:40.129517       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:25:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:25:40.330870       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:25:40.330922       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:25:40.330936       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:25:40.331080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:25:40.728196       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:25:40.728264       1 metrics.go:72] Registering metrics
	I1210 06:25:40.728522       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c] <==
	I1210 06:25:38.794826       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:25:38.795622       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:25:38.796048       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:38.796130       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:25:38.796151       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:25:38.796581       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:38.796626       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:25:38.800529       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:38.800555       1 policy_source.go:248] refreshing policies
	I1210 06:25:38.801616       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:25:38.803237       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:25:38.826081       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:25:38.841961       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:25:39.093585       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:25:39.125203       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:25:39.153663       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:25:39.164152       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:25:39.172783       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:25:39.213478       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.207.203"}
	I1210 06:25:39.225939       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.242.94"}
	I1210 06:25:39.697827       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:25:42.348556       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:25:42.348595       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:25:42.397924       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:25:42.498802       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76] <==
	I1210 06:25:41.951754       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.951743       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.951707       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.951756       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:41.952132       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953015       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953040       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953067       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953084       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953100       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953116       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953158       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953209       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953278       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:25:41.953303       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953337       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953352       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953349       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-126107"
	I1210 06:25:41.953421       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:25:41.958792       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.959225       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:42.053077       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:42.053100       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:25:42.053110       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:25:42.059820       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7] <==
	I1210 06:25:39.922881       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:25:39.983670       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:40.084490       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:40.084557       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 06:25:40.084655       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:25:40.105446       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:25:40.105542       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:25:40.112357       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:25:40.112871       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 06:25:40.112999       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:25:40.116659       1 config.go:200] "Starting service config controller"
	I1210 06:25:40.116714       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:25:40.116807       1 config.go:309] "Starting node config controller"
	I1210 06:25:40.116829       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:25:40.116852       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:25:40.116886       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:25:40.116893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:25:40.116908       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:25:40.116913       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:25:40.217784       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:25:40.217805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:25:40.217844       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6] <==
	I1210 06:25:37.461963       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:25:38.741439       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:25:38.741635       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:25:38.741701       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:25:38.741733       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:25:38.765950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 06:25:38.766055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:25:38.768415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:25:38.768563       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:38.768640       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:25:38.768668       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:25:38.869766       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.821890     673 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.825953     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-126107\" already exists" pod="kube-system/kube-controller-manager-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.825994     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.835674     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-126107\" already exists" pod="kube-system/kube-scheduler-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.835710     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.843799     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-126107\" already exists" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.843836     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.852357     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-126107\" already exists" pod="kube-system/kube-apiserver-newest-cni-126107"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.503599     673 apiserver.go:52] "Watching apiserver"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.508222     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-126107" containerName="kube-controller-manager"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.508695     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.536954     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc19225-90f1-4759-bb4f-bc2da959865d-xtables-lock\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537010     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-cni-cfg\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537070     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-lib-modules\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537103     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc19225-90f1-4759-bb4f-bc2da959865d-lib-modules\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537132     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-xtables-lock\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.561117     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.561384     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-126107" containerName="kube-apiserver"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.561878     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-126107" containerName="kube-scheduler"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.568586     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-126107\" already exists" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.568701     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:40 newest-cni-126107 kubelet[673]: E1210 06:25:40.569292     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:41 newest-cni-126107 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:41 newest-cni-126107 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:41 newest-cni-126107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
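helpers_test.go note: the kubelet/node output above shows the node stuck NotReady because no CNI configuration was found in /etc/cni/net.d. As a rough illustration only (not part of the test harness; the context name is copied from the logs and kubectl on PATH is an assumption), the same readiness condition could be queried from a small Go program:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Read the Ready condition of every node, mirroring the
		// "Ready False ... KubeletNotReady" row in the describe output above.
		// The context name is illustrative, taken from the logs.
		out, err := exec.Command("kubectl", "--context", "newest-cni-126107",
			"get", "nodes", "-o",
			`jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`,
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("node Ready status: %s\n", out)
	}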
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-126107 -n newest-cni-126107
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-126107 -n newest-cni-126107: exit status 2 (328.880931ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-126107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj: exit status 1 (61.563464ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-rsznm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-zgrz2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-wg4gj" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj: exit status 1
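helpers_test.go note: the non-running-pod listing used in the post-mortem step above (kubectl get po with a field selector on status.phase) can be reproduced with a minimal standalone sketch; the context name and kubectl availability are assumptions, not part of the harness:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List pods in all namespaces whose phase is not Running,
		// the same query the post-mortem step runs above.
		out, err := exec.Command("kubectl", "--context", "newest-cni-126107",
			"get", "po", "-A",
			"--field-selector=status.phase!=Running",
			"-o", "jsonpath={.items[*].metadata.name}",
		).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("non-running pods:", strings.Fields(string(out)))
	}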
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-126107
helpers_test.go:244: (dbg) docker inspect newest-cni-126107:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647",
	        "Created": "2025-12-10T06:25:04.189215995Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345744,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T06:25:30.353220857Z",
	            "FinishedAt": "2025-12-10T06:25:29.461038601Z"
	        },
	        "Image": "sha256:9dfcc37acf4d8ed51daae49d651516447e95ced4bb0b0783e8c53cb79a74f008",
	        "ResolvConfPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/hosts",
	        "LogPath": "/var/lib/docker/containers/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647/fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647-json.log",
	        "Name": "/newest-cni-126107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-126107:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-126107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fd722e851bba0978618dcfb48e2cdc6ab631c49bfe6d429eae657de39ab08647",
	                "LowerDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f-init/diff:/var/lib/docker/overlay2/5745aee6e8b05b3a4cc4ad6aee891df9d6438d830895f70bd2a764a976802708/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38e82e185bdd87c0340e37cb6e3e8e9f3f15eb550f0a30b8c8f391422bf5066f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-126107",
	                "Source": "/var/lib/docker/volumes/newest-cni-126107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-126107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-126107",
	                "name.minikube.sigs.k8s.io": "newest-cni-126107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "26c73795111b1a1e1cc9cd9d3d8cb35a49de63475ca43fa6bc3afa3cb4c31e42",
	            "SandboxKey": "/var/run/docker/netns/26c73795111b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-126107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fb43db5713696641e964d1432fde86d3443ec48d700f0cf8b03518e1f4ba75f2",
	                    "EndpointID": "433860311d732e13494c8c0c312f95ffc2028967edb4e36309f5c24296cc2fa8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:25:15:f4:c6:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-126107",
	                        "fd722e851bba"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
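helpers_test.go note: the docker inspect output above still reports "Paused": false for the kic container, consistent with the status checks in this post-mortem returning "Running". A minimal sketch (illustrative only; it models just the State fields visible in the JSON above) for reading that flag programmatically:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the State fields shown in the docker inspect
	// output above; everything else is ignored during decoding.
	type inspectEntry struct {
		State struct {
			Status string `json:"Status"`
			Paused bool   `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-126107").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			fmt.Println("could not decode inspect output:", err)
			return
		}
		fmt.Printf("status=%s paused=%v\n", entries[0].State.Status, entries[0].State.Paused)
	}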
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107
E1210 06:25:45.108504   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/auto-201263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107: exit status 2 (321.303798ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-126107 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ old-k8s-version-424086 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ pause   │ -p old-k8s-version-424086 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │                     │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ delete  │ -p old-k8s-version-424086                                                                                                                                                                                                                            │ old-k8s-version-424086       │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:24 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:24 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ no-preload-713838 image list --format=json                                                                                                                                                                                                           │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p no-preload-713838 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ embed-certs-133470 image list --format=json                                                                                                                                                                                                          │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p embed-certs-133470 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p no-preload-713838                                                                                                                                                                                                                                 │ no-preload-713838            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-126107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ stop    │ -p newest-cni-126107 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ delete  │ -p embed-certs-133470                                                                                                                                                                                                                                │ embed-certs-133470           │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-126107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ start   │ -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ default-k8s-diff-port-643991 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p default-k8s-diff-port-643991 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-643991                                                                                                                                                                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ image   │ newest-cni-126107 image list --format=json                                                                                                                                                                                                           │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	│ pause   │ -p newest-cni-126107 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-126107            │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-643991                                                                                                                                                                                                                      │ default-k8s-diff-port-643991 │ jenkins │ v1.37.0 │ 10 Dec 25 06:25 UTC │ 10 Dec 25 06:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:25:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:25:30.109871  345537 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:25:30.110102  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110112  345537 out.go:374] Setting ErrFile to fd 2...
	I1210 06:25:30.110116  345537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:25:30.110304  345537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:25:30.110756  345537 out.go:368] Setting JSON to false
	I1210 06:25:30.111768  345537 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4081,"bootTime":1765343849,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:25:30.111827  345537 start.go:143] virtualization: kvm guest
	I1210 06:25:30.113927  345537 out.go:179] * [newest-cni-126107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:25:30.115752  345537 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:25:30.115752  345537 notify.go:221] Checking for updates...
	I1210 06:25:30.118522  345537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:25:30.119763  345537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:30.121229  345537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:25:30.122829  345537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:25:30.124211  345537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:25:30.126209  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:30.126830  345537 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:25:30.151836  345537 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:25:30.151928  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.210924  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.200725078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.211055  345537 docker.go:319] overlay module found
	I1210 06:25:30.212977  345537 out.go:179] * Using the docker driver based on existing profile
	I1210 06:25:30.214243  345537 start.go:309] selected driver: docker
	I1210 06:25:30.214258  345537 start.go:927] validating driver "docker" against &{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.214369  345537 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:25:30.215062  345537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:25:30.276019  345537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-10 06:25:30.266281878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:25:30.276342  345537 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:30.276370  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:30.276425  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:30.276460  345537 start.go:353] cluster config:
	{Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:30.278593  345537 out.go:179] * Starting "newest-cni-126107" primary control-plane node in "newest-cni-126107" cluster
	I1210 06:25:30.279972  345537 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 06:25:30.281412  345537 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 06:25:30.282704  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:30.282744  345537 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1210 06:25:30.282754  345537 cache.go:65] Caching tarball of preloaded images
	I1210 06:25:30.282808  345537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 06:25:30.282846  345537 preload.go:238] Found /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 06:25:30.282857  345537 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 06:25:30.282949  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.303637  345537 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 06:25:30.303656  345537 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 06:25:30.303670  345537 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:25:30.303700  345537 start.go:360] acquireMachinesLock for newest-cni-126107: {Name:mk95835e60131d01841dcfa433d5776bf10a491c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:25:30.303753  345537 start.go:364] duration metric: took 36.893µs to acquireMachinesLock for "newest-cni-126107"
	I1210 06:25:30.303770  345537 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:25:30.303776  345537 fix.go:54] fixHost starting: 
	I1210 06:25:30.303978  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.322589  345537 fix.go:112] recreateIfNeeded on newest-cni-126107: state=Stopped err=<nil>
	W1210 06:25:30.322625  345537 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:25:30.324708  345537 out.go:252] * Restarting existing docker container for "newest-cni-126107" ...
	I1210 06:25:30.324786  345537 cli_runner.go:164] Run: docker start newest-cni-126107
	I1210 06:25:30.586048  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:30.606349  345537 kic.go:430] container "newest-cni-126107" state is running.
	I1210 06:25:30.606765  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:30.626578  345537 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/config.json ...
	I1210 06:25:30.626856  345537 machine.go:94] provisionDockerMachine start ...
	I1210 06:25:30.626926  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:30.645878  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:30.646136  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:30.646149  345537 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:25:30.646758  345537 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53190->127.0.0.1:33139: read: connection reset by peer
	I1210 06:25:33.780525  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.780558  345537 ubuntu.go:182] provisioning hostname "newest-cni-126107"
	I1210 06:25:33.780660  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.800442  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.800684  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.800700  345537 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-126107 && echo "newest-cni-126107" | sudo tee /etc/hostname
	I1210 06:25:33.947960  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-126107
	
	I1210 06:25:33.948061  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:33.968186  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:33.968388  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:33.968404  345537 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:25:34.105319  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
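	The hostname snippet above is easier to follow with comments; this restates the same logic verbatim, with only the comments added (hostname and paths exactly as logged):
	
	    # Make sure the machine's own hostname resolves via /etc/hosts.
	    if ! grep -xq '.*\snewest-cni-126107' /etc/hosts; then
	        # Reuse an existing 127.0.1.1 entry if one is present...
	        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-126107/g' /etc/hosts
	        else
	            # ...otherwise append a fresh mapping.
	            echo '127.0.1.1 newest-cni-126107' | sudo tee -a /etc/hosts
	        fi
	    fi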
	I1210 06:25:34.105346  345537 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-8832/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-8832/.minikube}
	I1210 06:25:34.105372  345537 ubuntu.go:190] setting up certificates
	I1210 06:25:34.105382  345537 provision.go:84] configureAuth start
	I1210 06:25:34.105437  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.124550  345537 provision.go:143] copyHostCerts
	I1210 06:25:34.124621  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem, removing ...
	I1210 06:25:34.124635  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem
	I1210 06:25:34.124709  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/ca.pem (1078 bytes)
	I1210 06:25:34.124824  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem, removing ...
	I1210 06:25:34.124833  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem
	I1210 06:25:34.124860  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/cert.pem (1123 bytes)
	I1210 06:25:34.124930  345537 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem, removing ...
	I1210 06:25:34.124937  345537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem
	I1210 06:25:34.124961  345537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-8832/.minikube/key.pem (1675 bytes)
	I1210 06:25:34.125025  345537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-126107 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-126107]
	I1210 06:25:34.193303  345537 provision.go:177] copyRemoteCerts
	I1210 06:25:34.193367  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:25:34.193402  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.212955  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.311230  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:25:34.330510  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 06:25:34.354851  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:25:34.373898  345537 provision.go:87] duration metric: took 268.50473ms to configureAuth
	I1210 06:25:34.373925  345537 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:25:34.374105  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:34.374216  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.394026  345537 main.go:143] libmachine: Using SSH client type: native
	I1210 06:25:34.394302  345537 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1210 06:25:34.394331  345537 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 06:25:34.690980  345537 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 06:25:34.691015  345537 machine.go:97] duration metric: took 4.064140388s to provisionDockerMachine
	I1210 06:25:34.691029  345537 start.go:293] postStartSetup for "newest-cni-126107" (driver="docker")
	I1210 06:25:34.691080  345537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:25:34.691147  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:25:34.691183  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.710980  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.809193  345537 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:25:34.813269  345537 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:25:34.813313  345537 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:25:34.813327  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/addons for local assets ...
	I1210 06:25:34.813383  345537 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-8832/.minikube/files for local assets ...
	I1210 06:25:34.813505  345537 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem -> 123742.pem in /etc/ssl/certs
	I1210 06:25:34.813619  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 06:25:34.821859  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:34.840261  345537 start.go:296] duration metric: took 149.200393ms for postStartSetup
	I1210 06:25:34.840342  345537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:25:34.840397  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.859669  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:34.953162  345537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:25:34.958380  345537 fix.go:56] duration metric: took 4.654596627s for fixHost
	I1210 06:25:34.958415  345537 start.go:83] releasing machines lock for "newest-cni-126107", held for 4.654651631s
	I1210 06:25:34.958495  345537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-126107
	I1210 06:25:34.980057  345537 ssh_runner.go:195] Run: cat /version.json
	I1210 06:25:34.980079  345537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 06:25:34.980145  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:34.980146  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:35.002231  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.002423  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:35.094904  345537 ssh_runner.go:195] Run: systemctl --version
	I1210 06:25:35.153916  345537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 06:25:35.191258  345537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:25:35.196136  345537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:25:35.196197  345537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:25:35.204676  345537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
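	The find command two lines up is logged with its shell quoting stripped; a sketch of the same command with the quoting reconstructed (behavior unchanged) shows that it only renames conflicting bridge/podman CNI configs rather than deleting them:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
	
	Here nothing matched, so nothing was renamed and the kindnet CNI recommended earlier remains the plan.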
	I1210 06:25:35.204704  345537 start.go:496] detecting cgroup driver to use...
	I1210 06:25:35.204735  345537 detect.go:190] detected "systemd" cgroup driver on host os
	I1210 06:25:35.204795  345537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:25:35.220331  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:25:35.233476  345537 docker.go:218] disabling cri-docker service (if available) ...
	I1210 06:25:35.233536  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 06:25:35.248932  345537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 06:25:35.263006  345537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 06:25:35.344446  345537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 06:25:35.426091  345537 docker.go:234] disabling docker service ...
	I1210 06:25:35.426167  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 06:25:35.440762  345537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 06:25:35.453694  345537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 06:25:35.544590  345537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 06:25:35.623824  345537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:25:35.636961  345537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:25:35.651831  345537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 06:25:35.651879  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.661164  345537 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1210 06:25:35.661233  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.670965  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.681369  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.691670  345537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:25:35.702453  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.712207  345537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.722297  345537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 06:25:35.731324  345537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:25:35.740103  345537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:25:35.748317  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:35.839721  345537 ssh_runner.go:195] Run: sudo systemctl restart crio
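	Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on the following values before crio is restarted. This is a reconstruction from the commands, not a capture of the file; only the keys the commands touch are shown:
	
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]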
	I1210 06:25:35.979010  345537 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 06:25:35.979076  345537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 06:25:35.983132  345537 start.go:564] Will wait 60s for crictl version
	I1210 06:25:35.983199  345537 ssh_runner.go:195] Run: which crictl
	I1210 06:25:35.986794  345537 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:25:36.012672  345537 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 06:25:36.012774  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.047570  345537 ssh_runner.go:195] Run: crio --version
	I1210 06:25:36.081122  345537 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 06:25:36.085692  345537 cli_runner.go:164] Run: docker network inspect newest-cni-126107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 06:25:36.104980  345537 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 06:25:36.109299  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
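	The one-liner above rewrites /etc/hosts through a temp copy instead of editing it in place (which also works when /etc/hosts is a bind mount, as it typically is inside a container). Restated with comments, IP and name exactly as logged:
	
	    {
	      # Drop any existing host.minikube.internal line...
	      grep -v $'\thost.minikube.internal$' /etc/hosts
	      # ...and append a fresh mapping for it.
	      echo "192.168.85.1	host.minikube.internal"
	    } > /tmp/h.$$            # temp file keyed by the shell's PID
	    sudo cp /tmp/h.$$ /etc/hosts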
	I1210 06:25:36.123029  345537 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 06:25:36.124561  345537 kubeadm.go:884] updating cluster {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:25:36.124698  345537 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 06:25:36.124754  345537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:36.163641  345537 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:36.163668  345537 crio.go:433] Images already preloaded, skipping extraction
	I1210 06:25:36.163725  345537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 06:25:36.192283  345537 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 06:25:36.192308  345537 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:25:36.192319  345537 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 06:25:36.192485  345537 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-126107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:25:36.192577  345537 ssh_runner.go:195] Run: crio config
	I1210 06:25:36.242010  345537 cni.go:84] Creating CNI manager for ""
	I1210 06:25:36.242038  345537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 06:25:36.242057  345537 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 06:25:36.242093  345537 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-126107 NodeName:newest-cni-126107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:25:36.242249  345537 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-126107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:25:36.242323  345537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 06:25:36.252671  345537 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:25:36.252732  345537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:25:36.263066  345537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 06:25:36.278849  345537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 06:25:36.292835  345537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1210 06:25:36.307500  345537 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:25:36.311352  345537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 06:25:36.322425  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:36.407644  345537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:36.428133  345537 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107 for IP: 192.168.85.2
	I1210 06:25:36.428155  345537 certs.go:195] generating shared ca certs ...
	I1210 06:25:36.428176  345537 certs.go:227] acquiring lock for ca certs: {Name:mkfe434cecfa5233603e8d01fb39a21abb4f8ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:36.428342  345537 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key
	I1210 06:25:36.428400  345537 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key
	I1210 06:25:36.428414  345537 certs.go:257] generating profile certs ...
	I1210 06:25:36.428543  345537 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/client.key
	I1210 06:25:36.428653  345537 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key.23b909bf
	I1210 06:25:36.428711  345537 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key
	I1210 06:25:36.428855  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem (1338 bytes)
	W1210 06:25:36.428888  345537 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374_empty.pem, impossibly tiny 0 bytes
	I1210 06:25:36.428900  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 06:25:36.428925  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/ca.pem (1078 bytes)
	I1210 06:25:36.428958  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/cert.pem (1123 bytes)
	I1210 06:25:36.428996  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/certs/key.pem (1675 bytes)
	I1210 06:25:36.429054  345537 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem (1708 bytes)
	I1210 06:25:36.429757  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:25:36.450791  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:25:36.473953  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:25:36.495582  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:25:36.521273  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:25:36.544440  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:25:36.563566  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:25:36.583534  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/newest-cni-126107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 06:25:36.604712  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/ssl/certs/123742.pem --> /usr/share/ca-certificates/123742.pem (1708 bytes)
	I1210 06:25:36.622925  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:25:36.644601  345537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-8832/.minikube/certs/12374.pem --> /usr/share/ca-certificates/12374.pem (1338 bytes)
	I1210 06:25:36.663142  345537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:25:36.677349  345537 ssh_runner.go:195] Run: openssl version
	I1210 06:25:36.683704  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.691269  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/123742.pem /etc/ssl/certs/123742.pem
	I1210 06:25:36.699881  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.704542  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:52 /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.704607  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123742.pem
	I1210 06:25:36.741885  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:25:36.749752  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.758272  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:25:36.768438  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.772964  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.773015  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:25:36.810995  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:25:36.818904  345537 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.827591  345537 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12374.pem /etc/ssl/certs/12374.pem
	I1210 06:25:36.836196  345537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.840276  345537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:52 /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.840333  345537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12374.pem
	I1210 06:25:36.880057  345537 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
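	The ln / openssl / test triples above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up through a symlink named after its subject-name hash, so each certificate is linked in by name and then the hash-named link is checked. Restated for one of the CAs, with the hash value taken from the log:
	
	    # Link the CA into the trusted-certificates directory by name...
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    # ...print the subject-name hash OpenSSL uses for lookups (b5213941 for this CA)...
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # ...and confirm the hash-named symlink exists.
	    sudo test -L /etc/ssl/certs/b5213941.0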
	I1210 06:25:36.893799  345537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:25:36.899891  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:25:36.939598  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:25:36.986565  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:25:37.033737  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:25:37.088093  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:25:37.139249  345537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
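	Each -checkend 86400 run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mark it as expiring (here all of them pass, since the restart proceeds straight to StartCluster). For example:
	
	    # Exits 0 if the certificate is still valid 24 hours from now, 1 if it will have expired by then.
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400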
	I1210 06:25:37.179916  345537 kubeadm.go:401] StartCluster: {Name:newest-cni-126107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-126107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:25:37.180037  345537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 06:25:37.180128  345537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 06:25:37.214651  345537 cri.go:89] found id: "2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0"
	I1210 06:25:37.214678  345537 cri.go:89] found id: "9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c"
	I1210 06:25:37.214685  345537 cri.go:89] found id: "9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76"
	I1210 06:25:37.214691  345537 cri.go:89] found id: "d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6"
	I1210 06:25:37.214695  345537 cri.go:89] found id: ""
	I1210 06:25:37.214743  345537 ssh_runner.go:195] Run: sudo runc list -f json
	W1210 06:25:37.228831  345537 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T06:25:37Z" level=error msg="open /run/runc: no such file or directory"
	I1210 06:25:37.228906  345537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:25:37.239408  345537 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:25:37.239432  345537 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:25:37.239500  345537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:25:37.248945  345537 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:25:37.249662  345537 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-126107" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:37.249998  345537 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-8832/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-126107" cluster setting kubeconfig missing "newest-cni-126107" context setting]
	I1210 06:25:37.250682  345537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.252543  345537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:25:37.262369  345537 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1210 06:25:37.262409  345537 kubeadm.go:602] duration metric: took 22.970004ms to restartPrimaryControlPlane
	I1210 06:25:37.262426  345537 kubeadm.go:403] duration metric: took 82.529817ms to StartCluster
	I1210 06:25:37.262445  345537 settings.go:142] acquiring lock: {Name:mkcfa52e2e09cf8266d26c2d1d1f162454a79515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.262545  345537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:25:37.263655  345537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/kubeconfig: {Name:mk2d0febd8c6a30a71f02d20e2057fd6d147cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:25:37.263975  345537 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 06:25:37.264116  345537 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 06:25:37.264214  345537 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-126107"
	I1210 06:25:37.264218  345537 config.go:182] Loaded profile config "newest-cni-126107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:25:37.264232  345537 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-126107"
	W1210 06:25:37.264245  345537 addons.go:248] addon storage-provisioner should already be in state true
	I1210 06:25:37.264239  345537 addons.go:70] Setting dashboard=true in profile "newest-cni-126107"
	I1210 06:25:37.264258  345537 addons.go:239] Setting addon dashboard=true in "newest-cni-126107"
	I1210 06:25:37.264262  345537 addons.go:70] Setting default-storageclass=true in profile "newest-cni-126107"
	W1210 06:25:37.264266  345537 addons.go:248] addon dashboard should already be in state true
	I1210 06:25:37.264276  345537 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-126107"
	I1210 06:25:37.264289  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.264276  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.264613  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.264788  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.264788  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.267674  345537 out.go:179] * Verifying Kubernetes components...
	I1210 06:25:37.269553  345537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:25:37.291399  345537 addons.go:239] Setting addon default-storageclass=true in "newest-cni-126107"
	W1210 06:25:37.291482  345537 addons.go:248] addon default-storageclass should already be in state true
	I1210 06:25:37.291526  345537 host.go:66] Checking if "newest-cni-126107" exists ...
	I1210 06:25:37.292024  345537 cli_runner.go:164] Run: docker container inspect newest-cni-126107 --format={{.State.Status}}
	I1210 06:25:37.293251  345537 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 06:25:37.294720  345537 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:37.294739  345537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 06:25:37.294792  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.294941  345537 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 06:25:37.297597  345537 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 06:25:37.298850  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 06:25:37.298869  345537 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 06:25:37.298930  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.324215  345537 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:37.324416  345537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 06:25:37.326155  345537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-126107
	I1210 06:25:37.339077  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:37.343861  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:37.361713  345537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/newest-cni-126107/id_rsa Username:docker}
	I1210 06:25:37.441332  345537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:25:37.458881  345537 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:25:37.458959  345537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:25:37.466318  345537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 06:25:37.468827  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 06:25:37.468849  345537 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 06:25:37.476962  345537 api_server.go:72] duration metric: took 212.944608ms to wait for apiserver process to appear ...
	I1210 06:25:37.476988  345537 api_server.go:88] waiting for apiserver healthz status ...
	I1210 06:25:37.477013  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:37.486897  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 06:25:37.487100  345537 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 06:25:37.489013  345537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 06:25:37.508740  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 06:25:37.508823  345537 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 06:25:37.531937  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 06:25:37.531961  345537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 06:25:37.555837  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 06:25:37.555866  345537 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 06:25:37.572595  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 06:25:37.572619  345537 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 06:25:37.586058  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 06:25:37.586085  345537 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 06:25:37.599741  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 06:25:37.599765  345537 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 06:25:37.613383  345537 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:25:37.613409  345537 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 06:25:37.627992  345537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 06:25:38.728276  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:25:38.728311  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:25:38.728328  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:38.736079  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 06:25:38.736114  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 06:25:38.977153  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:38.981719  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:25:38.981748  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:25:39.304231  345537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.837876201s)
	I1210 06:25:39.304305  345537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.81526888s)
	I1210 06:25:39.304518  345537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.67646888s)
	I1210 06:25:39.306569  345537 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-126107 addons enable metrics-server
	
	I1210 06:25:39.317431  345537 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1210 06:25:39.319032  345537 addons.go:530] duration metric: took 2.054942448s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1210 06:25:39.477432  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:39.482308  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 06:25:39.482344  345537 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 06:25:39.977663  345537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1210 06:25:39.981988  345537 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1210 06:25:39.983156  345537 api_server.go:141] control plane version: v1.35.0-beta.0
	I1210 06:25:39.983187  345537 api_server.go:131] duration metric: took 2.506191082s to wait for apiserver health ...
	I1210 06:25:39.983200  345537 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 06:25:39.989391  345537 system_pods.go:59] 8 kube-system pods found
	I1210 06:25:39.989430  345537 system_pods.go:61] "coredns-7d764666f9-rsznm" [0ac06f22-e09b-497c-ad77-f09e614de459] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:25:39.989439  345537 system_pods.go:61] "etcd-newest-cni-126107" [01d020b0-65ef-48ac-a7fc-abd86d760e8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 06:25:39.989446  345537 system_pods.go:61] "kindnet-xj7td" [3cf83d19-8dae-4734-bdb5-0ce2410f4c99] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1210 06:25:39.989457  345537 system_pods.go:61] "kube-apiserver-newest-cni-126107" [984910c9-c993-4791-9830-55f3632d1af4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 06:25:39.989462  345537 system_pods.go:61] "kube-controller-manager-newest-cni-126107" [a811eae5-9f29-4614-9ab8-22c76a55f3b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 06:25:39.989482  345537 system_pods.go:61] "kube-proxy-sxc9w" [7bc19225-90f1-4759-bb4f-bc2da959865d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 06:25:39.989492  345537 system_pods.go:61] "kube-scheduler-newest-cni-126107" [689e6051-ab4a-4edc-be1d-b6aa4b77b3a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 06:25:39.989500  345537 system_pods.go:61] "storage-provisioner" [e274ee92-ba8d-446f-a4d8-dd2e9c49ca78] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1210 06:25:39.989515  345537 system_pods.go:74] duration metric: took 6.307433ms to wait for pod list to return data ...
	I1210 06:25:39.989526  345537 default_sa.go:34] waiting for default service account to be created ...
	I1210 06:25:39.992513  345537 default_sa.go:45] found service account: "default"
	I1210 06:25:39.992539  345537 default_sa.go:55] duration metric: took 3.007316ms for default service account to be created ...
	I1210 06:25:39.992562  345537 kubeadm.go:587] duration metric: took 2.728539448s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 06:25:39.992579  345537 node_conditions.go:102] verifying NodePressure condition ...
	I1210 06:25:39.994716  345537 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1210 06:25:39.994742  345537 node_conditions.go:123] node cpu capacity is 8
	I1210 06:25:39.994755  345537 node_conditions.go:105] duration metric: took 2.171889ms to run NodePressure ...
	I1210 06:25:39.994765  345537 start.go:242] waiting for startup goroutines ...
	I1210 06:25:39.994772  345537 start.go:247] waiting for cluster config update ...
	I1210 06:25:39.994782  345537 start.go:256] writing updated cluster config ...
	I1210 06:25:39.995033  345537 ssh_runner.go:195] Run: rm -f paused
	I1210 06:25:40.048347  345537 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1210 06:25:40.050410  345537 out.go:179] * Done! kubectl is now configured to use "newest-cni-126107" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.815166938Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-sxc9w/POD" id=bb7cf586-9510-4d05-87e2-e50fd9515391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.815220765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.817273041Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.818287056Z" level=info msg="Ran pod sandbox e38f460d235ea0848304127ddd2b8109ca812a52d7cbb6575fd6c99c2b0f2699 with infra container: kube-system/kindnet-xj7td/POD" id=baaf3ced-f22b-4e33-81aa-4813dc18cf18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.818425205Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bb7cf586-9510-4d05-87e2-e50fd9515391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.81963345Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d8f7afad-7059-4857-ac83-85fa0eef2a84 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.820701774Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.821581144Z" level=info msg="Ran pod sandbox 1b17add6060493f78ee6f799a460f2dd77e3d2d98eff4c627ae8fa92eee70192 with infra container: kube-system/kube-proxy-sxc9w/POD" id=bb7cf586-9510-4d05-87e2-e50fd9515391 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.82354788Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=fc484a46-ce18-4a41-acd3-f750d4aa5bd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.824545802Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=33c6c654-9d23-4752-97ea-334b460fc455 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.824844843Z" level=info msg="Creating container: kube-system/kindnet-xj7td/kindnet-cni" id=673548e5-21fa-4dff-81ce-dd2f9e037993 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.824951218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.825839787Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5b1718df-4ff9-4b80-b674-9103521a8aef name=/runtime.v1.ImageService/ImageStatus
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.828607247Z" level=info msg="Creating container: kube-system/kube-proxy-sxc9w/kube-proxy" id=b4fefbf4-c9e4-42fd-b5b7-ce9d64cff55d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.828735242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.829155912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.829724125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.836720345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.837400928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.860802685Z" level=info msg="Created container 4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d: kube-system/kindnet-xj7td/kindnet-cni" id=673548e5-21fa-4dff-81ce-dd2f9e037993 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.861484601Z" level=info msg="Starting container: 4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d" id=a2e65d14-7e8b-4611-a1cf-e537774d3c75 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.863884811Z" level=info msg="Started container" PID=1053 containerID=4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d description=kube-system/kindnet-xj7td/kindnet-cni id=a2e65d14-7e8b-4611-a1cf-e537774d3c75 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e38f460d235ea0848304127ddd2b8109ca812a52d7cbb6575fd6c99c2b0f2699
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.864998629Z" level=info msg="Created container 31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7: kube-system/kube-proxy-sxc9w/kube-proxy" id=b4fefbf4-c9e4-42fd-b5b7-ce9d64cff55d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.865895105Z" level=info msg="Starting container: 31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7" id=b954b46f-75f9-4091-af02-5cdac4cae4a1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 06:25:39 newest-cni-126107 crio[521]: time="2025-12-10T06:25:39.870265782Z" level=info msg="Started container" PID=1054 containerID=31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7 description=kube-system/kube-proxy-sxc9w/kube-proxy id=b954b46f-75f9-4091-af02-5cdac4cae4a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b17add6060493f78ee6f799a460f2dd77e3d2d98eff4c627ae8fa92eee70192
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	31e4cd08f91ff       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   5 seconds ago       Running             kube-proxy                1                   1b17add606049       kube-proxy-sxc9w                            kube-system
	4c8d653511396       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   e38f460d235ea       kindnet-xj7td                               kube-system
	2a2db7437d32a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   84fa657a84fbc       etcd-newest-cni-126107                      kube-system
	9503f5a9aae53       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   2cf788f34b8c3       kube-apiserver-newest-cni-126107            kube-system
	9cc3c395184c1       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   7926444c27431       kube-controller-manager-newest-cni-126107   kube-system
	d55554d77c312       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   8 seconds ago       Running             kube-scheduler            1                   57ad64da54a22       kube-scheduler-newest-cni-126107            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-126107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-126107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=newest-cni-126107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T06_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 06:25:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-126107
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 06:25:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 10 Dec 2025 06:25:38 +0000   Wed, 10 Dec 2025 06:25:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-126107
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 0992b7e47f4f804d2f02c3066938a460
	  System UUID:                48dcb149-8660-4400-bf91-b049b5a968fc
	  Boot ID:                    cce7104c-1270-4b6b-af66-b04ce0de633c
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-126107                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-xj7td                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-126107             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-126107    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-sxc9w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-126107             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  23s   node-controller  Node newest-cni-126107 event: Registered Node newest-cni-126107 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-126107 event: Registered Node newest-cni-126107 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[Dec10 06:21] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 7e b1 cc cb 4a c1 08 06
	[  +0.000451] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 67 21 4d 5d 12 08 06
	[ +47.984386] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[  +1.136322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e cf a5 c8 c4 7c 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[Dec10 06:22] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	[ +10.598490] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 35 90 e5 6e e9 08 06
	[  +0.000401] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 36 4e cf 84 e2 64 08 06
	[ +28.872835] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 53 b5 51 38 03 08 06
	[  +0.000413] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4e 6d f9 62 31 99 08 06
	[  +9.820727] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e c5 0b 85 ba 10 08 06
	[  +0.000485] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 a6 95 b8 c2 8a 08 06
	
	
	==> etcd [2a2db7437d32a2b904b0d325d8814b054a94b5f466e98eaa0b90cde7bfed80c0] <==
	{"level":"warn","ts":"2025-12-10T06:25:38.014198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.022122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.030069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.039555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.047334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.054265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.061009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.068502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.075609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.083740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.093603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.106932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.115692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.130688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.137881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.144767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.152084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.158912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.176007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.184390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.200535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.207649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.215921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.223114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T06:25:38.284328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43476","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:25:45 up  1:08,  0 user,  load average: 4.43, 4.74, 3.07
	Linux newest-cni-126107 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c8d653511396a1d9eae8851e8d4ea46706940e86eccaaa3b8b1c0e6b5f5805d] <==
	I1210 06:25:40.128994       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 06:25:40.129298       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1210 06:25:40.129480       1 main.go:148] setting mtu 1500 for CNI 
	I1210 06:25:40.129503       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 06:25:40.129517       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T06:25:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 06:25:40.330870       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 06:25:40.330922       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 06:25:40.330936       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 06:25:40.331080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1210 06:25:40.728196       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 06:25:40.728264       1 metrics.go:72] Registering metrics
	I1210 06:25:40.728522       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9503f5a9aae53addbfb52e5d4088bf4caff61cd80df691ee52d82c0aae7e9a7c] <==
	I1210 06:25:38.794826       1 cache.go:39] Caches are synced for autoregister controller
	I1210 06:25:38.795622       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 06:25:38.796048       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:38.796130       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 06:25:38.796151       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 06:25:38.796581       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:38.796626       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 06:25:38.800529       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:38.800555       1 policy_source.go:248] refreshing policies
	I1210 06:25:38.801616       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1210 06:25:38.803237       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 06:25:38.826081       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 06:25:38.841961       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1210 06:25:39.093585       1 controller.go:667] quota admission added evaluator for: namespaces
	I1210 06:25:39.125203       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1210 06:25:39.153663       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 06:25:39.164152       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 06:25:39.172783       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 06:25:39.213478       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.207.203"}
	I1210 06:25:39.225939       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.242.94"}
	I1210 06:25:39.697827       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1210 06:25:42.348556       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:25:42.348595       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 06:25:42.397924       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 06:25:42.498802       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9cc3c395184c12a3759801f1587207d9b0431f0494a36ccbf5f56ab01df6ba76] <==
	I1210 06:25:41.951754       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.951743       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.951707       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.951756       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:41.952132       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953015       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953040       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953067       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953084       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953100       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953116       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953158       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953209       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953278       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1210 06:25:41.953303       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953337       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953352       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.953349       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-126107"
	I1210 06:25:41.953421       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1210 06:25:41.958792       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:41.959225       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:42.053077       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:42.053100       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1210 06:25:42.053110       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1210 06:25:42.059820       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [31e4cd08f91ffa6e63d872d64de0a144e7b7de25fe0dcf9dcda8fd9394deeeb7] <==
	I1210 06:25:39.922881       1 server_linux.go:53] "Using iptables proxy"
	I1210 06:25:39.983670       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:40.084490       1 shared_informer.go:377] "Caches are synced"
	I1210 06:25:40.084557       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1210 06:25:40.084655       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 06:25:40.105446       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 06:25:40.105542       1 server_linux.go:136] "Using iptables Proxier"
	I1210 06:25:40.112357       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 06:25:40.112871       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1210 06:25:40.112999       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:25:40.116659       1 config.go:200] "Starting service config controller"
	I1210 06:25:40.116714       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 06:25:40.116807       1 config.go:309] "Starting node config controller"
	I1210 06:25:40.116829       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 06:25:40.116852       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 06:25:40.116886       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 06:25:40.116893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 06:25:40.116908       1 config.go:106] "Starting endpoint slice config controller"
	I1210 06:25:40.116913       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 06:25:40.217784       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 06:25:40.217805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 06:25:40.217844       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d55554d77c312dbafd4b804752687f64bb10aeb9c0ec85e5b2d7595fd1258bf6] <==
	I1210 06:25:37.461963       1 serving.go:386] Generated self-signed cert in-memory
	W1210 06:25:38.741439       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 06:25:38.741635       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 06:25:38.741701       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 06:25:38.741733       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 06:25:38.765950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1210 06:25:38.766055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 06:25:38.768415       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 06:25:38.768563       1 shared_informer.go:370] "Waiting for caches to sync"
	I1210 06:25:38.768640       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 06:25:38.768668       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 06:25:38.869766       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.821890     673 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.825953     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-126107\" already exists" pod="kube-system/kube-controller-manager-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.825994     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.835674     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-126107\" already exists" pod="kube-system/kube-scheduler-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.835710     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.843799     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-126107\" already exists" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: I1210 06:25:38.843836     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-126107"
	Dec 10 06:25:38 newest-cni-126107 kubelet[673]: E1210 06:25:38.852357     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-126107\" already exists" pod="kube-system/kube-apiserver-newest-cni-126107"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.503599     673 apiserver.go:52] "Watching apiserver"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.508222     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-126107" containerName="kube-controller-manager"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.508695     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.536954     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc19225-90f1-4759-bb4f-bc2da959865d-xtables-lock\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537010     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-cni-cfg\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537070     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-lib-modules\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537103     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc19225-90f1-4759-bb4f-bc2da959865d-lib-modules\") pod \"kube-proxy-sxc9w\" (UID: \"7bc19225-90f1-4759-bb4f-bc2da959865d\") " pod="kube-system/kube-proxy-sxc9w"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.537132     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cf83d19-8dae-4734-bdb5-0ce2410f4c99-xtables-lock\") pod \"kindnet-xj7td\" (UID: \"3cf83d19-8dae-4734-bdb5-0ce2410f4c99\") " pod="kube-system/kindnet-xj7td"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: I1210 06:25:39.561117     673 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.561384     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-126107" containerName="kube-apiserver"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.561878     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-126107" containerName="kube-scheduler"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.568586     673 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-126107\" already exists" pod="kube-system/etcd-newest-cni-126107"
	Dec 10 06:25:39 newest-cni-126107 kubelet[673]: E1210 06:25:39.568701     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:40 newest-cni-126107 kubelet[673]: E1210 06:25:40.569292     673 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-126107" containerName="etcd"
	Dec 10 06:25:41 newest-cni-126107 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 06:25:41 newest-cni-126107 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 06:25:41 newest-cni-126107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-126107 -n newest-cni-126107
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-126107 -n newest-cni-126107: exit status 2 (321.732569ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-126107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj: exit status 1 (60.785421ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-rsznm" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-zgrz2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-wg4gj" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-126107 describe pod coredns-7d764666f9-rsznm storage-provisioner dashboard-metrics-scraper-867fb5f87b-zgrz2 kubernetes-dashboard-b84665fb8-wg4gj: exit status 1
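(Side note, not part of the captured output.) The post-mortem step at helpers_test.go:270 gathers non-running pods with kubectl and the field selector status.phase!=Running. The same query can be reproduced programmatically with client-go; the sketch below is illustrative only, assumes a default kubeconfig at ~/.kube/config, and is not the helper code used by the test suite.

	// Hypothetical sketch: list pods that are not in the Running phase, mirroring
	// the kubectl --field-selector=status.phase!=Running query from the post-mortem.
	// Only the k8s.io/client-go and apimachinery imports are real; the rest is illustrative.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumption: credentials come from the default kubeconfig location.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Same field selector the report's post-mortem uses, across all namespaces.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

As in the report, pods listed by this query may already have been replaced or deleted by the time a follow-up describe runs, which is why the subsequent kubectl describe above returns NotFound for each of them.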
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.98s)
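(Side note, not part of the captured output.) The api_server.go lines earlier in the log show minikube polling https://192.168.85.2:8443/healthz, treating 403 (anonymous user) and 500 (poststarthooks still failing) as "not ready yet" and stopping once the endpoint returns 200 "ok". A minimal, hypothetical Go sketch of that kind of readiness poll follows; it is not minikube's actual implementation, and the URL, retry interval, timeout, and insecure TLS setting are assumptions for illustration.

	// Hypothetical sketch: poll an apiserver /healthz endpoint until it reports 200,
	// tolerating transient 403/500 responses like the retries logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Illustrative shortcut for a self-signed test cluster; minikube
			// uses the cluster CA rather than skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Anything other than 200 (e.g. 403 or 500) counts as "not ready yet".
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In the run above the loop converges quickly: the 403 and 500 responses at 06:25:38-39 give way to a 200 at 06:25:39, about 2.5 seconds after the wait began.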

                                                
                                    

Test pass (353/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.64
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 3.35
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.13
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.24
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.41
30 TestBinaryMirror 0.82
31 TestOffline 51.75
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 124.81
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 7.44
57 TestAddons/StoppedEnableDisable 16.74
58 TestCertOptions 28.47
59 TestCertExpiration 223.25
61 TestForceSystemdFlag 28.48
62 TestForceSystemdEnv 26.89
67 TestErrorSpam/setup 21.53
68 TestErrorSpam/start 0.69
69 TestErrorSpam/status 0.95
70 TestErrorSpam/pause 5.57
71 TestErrorSpam/unpause 5.36
72 TestErrorSpam/stop 2.71
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 67.44
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.34
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.6
84 TestFunctional/serial/CacheCmd/cache/add_local 0.94
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 41.19
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.23
95 TestFunctional/serial/LogsFileCmd 1.25
96 TestFunctional/serial/InvalidService 3.89
98 TestFunctional/parallel/ConfigCmd 0.44
99 TestFunctional/parallel/DashboardCmd 5.58
100 TestFunctional/parallel/DryRun 0.45
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 1.08
106 TestFunctional/parallel/ServiceCmdConnect 7.75
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 20.93
110 TestFunctional/parallel/SSHCmd 0.79
111 TestFunctional/parallel/CpCmd 2.22
112 TestFunctional/parallel/MySQL 23.64
113 TestFunctional/parallel/FileSync 0.32
114 TestFunctional/parallel/CertSync 2.16
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
122 TestFunctional/parallel/License 0.26
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.57
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.21
130 TestFunctional/parallel/ImageCommands/Setup 0.46
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.25
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.49
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
146 TestFunctional/parallel/MountCmd/any-port 6.88
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/parallel/MountCmd/specific-port 2
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
155 TestFunctional/parallel/ServiceCmd/DeployApp 6.14
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
157 TestFunctional/parallel/ProfileCmd/profile_list 0.48
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
159 TestFunctional/parallel/ServiceCmd/List 1.86
160 TestFunctional/parallel/ServiceCmd/JSONOutput 1.72
161 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
162 TestFunctional/parallel/ServiceCmd/Format 0.54
163 TestFunctional/parallel/ServiceCmd/URL 0.53
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 37.42
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.22
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.6
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.9
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.56
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 61.75
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.29
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.3
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.09
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.5
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 7.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.45
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.1
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 7.57
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 22.69
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.66
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.85
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 22.76
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.89
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.59
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.23
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 7.21
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 8.22
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.51
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.5
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.35
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.35
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.37
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.19
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.22
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.2
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.5
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.45
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 13.2
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.47
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.08
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.62
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 1.69
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.82
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.24
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.35
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.36
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.53
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.67
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.4
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.25
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.86
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 108.68
266 TestMultiControlPlane/serial/DeployApp 5.32
267 TestMultiControlPlane/serial/PingHostFromPods 1.09
268 TestMultiControlPlane/serial/AddWorkerNode 23.56
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
271 TestMultiControlPlane/serial/CopyFile 17.25
272 TestMultiControlPlane/serial/StopSecondaryNode 18.87
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.69
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 104.26
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.66
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
279 TestMultiControlPlane/serial/StopCluster 44.12
280 TestMultiControlPlane/serial/RestartCluster 51.79
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
282 TestMultiControlPlane/serial/AddSecondaryNode 69.81
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
288 TestJSONOutput/start/Command 40.29
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.12
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.25
313 TestKicCustomNetwork/create_custom_network 29.24
314 TestKicCustomNetwork/use_default_bridge_network 25.81
315 TestKicExistingNetwork 25.63
316 TestKicCustomSubnet 23.15
317 TestKicStaticIP 23.76
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 47.08
322 TestMountStart/serial/StartWithMountFirst 7.82
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 7.88
325 TestMountStart/serial/VerifyMountSecond 0.28
326 TestMountStart/serial/DeleteFirst 1.72
327 TestMountStart/serial/VerifyMountPostDelete 0.28
328 TestMountStart/serial/Stop 1.27
329 TestMountStart/serial/RestartStopped 7.45
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 65.01
334 TestMultiNode/serial/DeployApp2Nodes 4.08
335 TestMultiNode/serial/PingHostFrom2Pods 0.76
336 TestMultiNode/serial/AddNode 53.57
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.67
339 TestMultiNode/serial/CopyFile 9.87
340 TestMultiNode/serial/StopNode 2.3
341 TestMultiNode/serial/StartAfterStop 7.22
342 TestMultiNode/serial/RestartKeepsNodes 82.73
343 TestMultiNode/serial/DeleteNode 5.27
344 TestMultiNode/serial/StopMultiNode 28.73
345 TestMultiNode/serial/RestartMultiNode 44.41
346 TestMultiNode/serial/ValidateNameConflict 22.14
351 TestPreload 102.88
353 TestScheduledStopUnix 95.27
356 TestInsufficientStorage 11.97
357 TestRunningBinaryUpgrade 64.55
359 TestKubernetesUpgrade 161
360 TestMissingContainerUpgrade 97.89
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
363 TestNoKubernetes/serial/StartWithK8s 31.75
364 TestNoKubernetes/serial/StartWithStopK8s 16.6
365 TestNoKubernetes/serial/Start 9.97
366 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
367 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
368 TestNoKubernetes/serial/ProfileList 32.23
376 TestNetworkPlugins/group/false 3.51
380 TestNoKubernetes/serial/Stop 1.3
381 TestNoKubernetes/serial/StartNoArgs 6.36
382 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
383 TestStoppedBinaryUpgrade/Setup 0.55
384 TestStoppedBinaryUpgrade/Upgrade 48.82
386 TestPause/serial/Start 70.33
387 TestStoppedBinaryUpgrade/MinikubeLogs 1.73
395 TestNetworkPlugins/group/auto/Start 42.64
396 TestNetworkPlugins/group/calico/Start 49.24
397 TestNetworkPlugins/group/custom-flannel/Start 51.04
398 TestNetworkPlugins/group/auto/KubeletFlags 0.43
399 TestNetworkPlugins/group/auto/NetCatPod 9.22
400 TestPause/serial/SecondStartNoReconfiguration 6.48
401 TestNetworkPlugins/group/calico/ControllerPod 6.01
402 TestNetworkPlugins/group/auto/DNS 0.12
403 TestNetworkPlugins/group/auto/Localhost 0.12
404 TestNetworkPlugins/group/auto/HairPin 0.1
406 TestNetworkPlugins/group/calico/KubeletFlags 0.31
407 TestNetworkPlugins/group/calico/NetCatPod 10.21
408 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
409 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.3
410 TestNetworkPlugins/group/kindnet/Start 75.56
411 TestNetworkPlugins/group/calico/DNS 0.13
412 TestNetworkPlugins/group/calico/Localhost 0.11
413 TestNetworkPlugins/group/calico/HairPin 0.12
414 TestNetworkPlugins/group/custom-flannel/DNS 0.11
415 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
416 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
417 TestNetworkPlugins/group/flannel/Start 46.39
418 TestNetworkPlugins/group/enable-default-cni/Start 64.62
419 TestNetworkPlugins/group/bridge/Start 70.31
420 TestNetworkPlugins/group/flannel/ControllerPod 6.01
421 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
422 TestNetworkPlugins/group/flannel/NetCatPod 8.18
423 TestNetworkPlugins/group/flannel/DNS 0.11
424 TestNetworkPlugins/group/flannel/Localhost 0.1
425 TestNetworkPlugins/group/flannel/HairPin 0.09
426 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
427 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
428 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
429 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
430 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
431 TestNetworkPlugins/group/kindnet/DNS 0.12
432 TestNetworkPlugins/group/kindnet/Localhost 0.09
433 TestNetworkPlugins/group/kindnet/HairPin 0.1
435 TestStartStop/group/old-k8s-version/serial/FirstStart 52.06
436 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
437 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
438 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
439 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
440 TestNetworkPlugins/group/bridge/NetCatPod 9.21
441 TestNetworkPlugins/group/bridge/DNS 0.13
442 TestNetworkPlugins/group/bridge/Localhost 0.11
443 TestNetworkPlugins/group/bridge/HairPin 0.11
445 TestStartStop/group/no-preload/serial/FirstStart 52.88
447 TestStartStop/group/embed-certs/serial/FirstStart 46.42
449 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.48
450 TestStartStop/group/old-k8s-version/serial/DeployApp 8.29
452 TestStartStop/group/old-k8s-version/serial/Stop 16.22
453 TestStartStop/group/no-preload/serial/DeployApp 8.23
454 TestStartStop/group/embed-certs/serial/DeployApp 7.23
455 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
456 TestStartStop/group/old-k8s-version/serial/SecondStart 43.45
459 TestStartStop/group/no-preload/serial/Stop 18.22
460 TestStartStop/group/embed-certs/serial/Stop 18.58
461 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
463 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.39
464 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
465 TestStartStop/group/no-preload/serial/SecondStart 44.88
466 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
467 TestStartStop/group/embed-certs/serial/SecondStart 50.32
468 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
469 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.83
470 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
471 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
472 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
475 TestStartStop/group/newest-cni/serial/FirstStart 25.91
476 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
477 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
478 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
479 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
481 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
482 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
484 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
485 TestStartStop/group/newest-cni/serial/DeployApp 0
487 TestStartStop/group/newest-cni/serial/Stop 2.47
488 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
489 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
490 TestStartStop/group/newest-cni/serial/SecondStart 10.37
491 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
493 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
494 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
495 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
x
+
TestDownloadOnly/v1.28.0/json-events (8.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-684743 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-684743 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.640839471s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.64s)
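The json-events subtests drive minikube start with -o=json --download-only, so every progress step is emitted as a newline-delimited JSON event on stdout. A hedged sketch of replaying a simplified version of that invocation and filtering the event stream (jq and the top-level type field are assumptions about the JSON envelope, not part of the test harness; the profile name is only an example):

	# print just the event types emitted during a download-only start
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-684743 \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker \
	  | jq -r '.type'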

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 05:43:40.937085   12374 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 05:43:40.937201   12374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
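The preload-exists check above only verifies that the cached tarball is present on disk; a hedged shell equivalent, using the cache path quoted verbatim from the log lines above:

	# succeeds if the preload tarball found by preload.go is still in the local cache
	test -f /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 \
	  && echo "preload present"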

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-684743
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-684743: exit status 85 (74.198661ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-684743 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-684743 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:43:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:43:32.352375   12386 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:32.352463   12386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:32.352480   12386 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:32.352486   12386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:32.353167   12386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	W1210 05:43:32.353297   12386 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22089-8832/.minikube/config/config.json: open /home/jenkins/minikube-integration/22089-8832/.minikube/config/config.json: no such file or directory
	I1210 05:43:32.353793   12386 out.go:368] Setting JSON to true
	I1210 05:43:32.354694   12386 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1563,"bootTime":1765343849,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:43:32.354756   12386 start.go:143] virtualization: kvm guest
	I1210 05:43:32.358628   12386 out.go:99] [download-only-684743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:43:32.358817   12386 notify.go:221] Checking for updates...
	W1210 05:43:32.358810   12386 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 05:43:32.360029   12386 out.go:171] MINIKUBE_LOCATION=22089
	I1210 05:43:32.361359   12386 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:32.362943   12386 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:43:32.364503   12386 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:43:32.365909   12386 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:43:32.368262   12386 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:43:32.368463   12386 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:32.393155   12386 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:43:32.393267   12386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:32.631021   12386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-10 05:43:32.621737808 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:32.631142   12386 docker.go:319] overlay module found
	I1210 05:43:32.632877   12386 out.go:99] Using the docker driver based on user configuration
	I1210 05:43:32.632914   12386 start.go:309] selected driver: docker
	I1210 05:43:32.632922   12386 start.go:927] validating driver "docker" against <nil>
	I1210 05:43:32.633017   12386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:32.695731   12386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-10 05:43:32.684163289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:32.695894   12386 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:43:32.696398   12386 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 05:43:32.696586   12386 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:43:32.698829   12386 out.go:171] Using Docker driver with root privileges
	I1210 05:43:32.700295   12386 cni.go:84] Creating CNI manager for ""
	I1210 05:43:32.700369   12386 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 05:43:32.700384   12386 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 05:43:32.700491   12386 start.go:353] cluster config:
	{Name:download-only-684743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-684743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:43:32.701895   12386 out.go:99] Starting "download-only-684743" primary control-plane node in "download-only-684743" cluster
	I1210 05:43:32.701917   12386 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 05:43:32.703148   12386 out.go:99] Pulling base image v0.0.48-1765319469-22089 ...
	I1210 05:43:32.703190   12386 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:43:32.703280   12386 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 05:43:32.717879   12386 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 05:43:32.717911   12386 cache.go:65] Caching tarball of preloaded images
	I1210 05:43:32.718124   12386 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:43:32.720373   12386 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 05:43:32.720406   12386 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1210 05:43:32.722184   12386 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 05:43:32.722370   12386 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 05:43:32.722478   12386 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 05:43:32.743451   12386 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1210 05:43:32.743621   12386 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1210 05:43:35.800661   12386 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 05:43:35.801105   12386 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/download-only-684743/config.json ...
	I1210 05:43:35.801142   12386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/download-only-684743/config.json: {Name:mk30247b4c6a1ab3346fb8cae8b48e929183aa45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:43:35.801338   12386 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 05:43:35.801574   12386 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-684743 host does not exist
	  To start a cluster, run: "minikube start -p download-only-684743"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
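The Last Start log above shows the preload tarball being fetched from GCS with an md5 checksum obtained from the GCS API. As a hedged illustration (not something the test performs), the same artifact could be fetched and verified by hand with the URL and checksum quoted in the download.go and preload.go lines above:

	URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	curl -fLo preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 "$URL"
	# checksum value copied verbatim from the GCS API response logged above
	echo "72bc7f8573f574c02d8c9a9b3496176b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -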

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-684743
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (3.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-560719 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-560719 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.345557147s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 05:43:44.751724   12374 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 05:43:44.751784   12374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-560719
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-560719: exit status 85 (75.817929ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684743 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-684743 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-684743                                                                                                                                                   │ download-only-684743 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ -o=json --download-only -p download-only-560719 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-560719 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:43:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:43:41.458332   12743 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:41.458450   12743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:41.458457   12743 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:41.458464   12743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:41.458669   12743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:43:41.459130   12743 out.go:368] Setting JSON to true
	I1210 05:43:41.459948   12743 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1572,"bootTime":1765343849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:43:41.460002   12743 start.go:143] virtualization: kvm guest
	I1210 05:43:41.462076   12743 out.go:99] [download-only-560719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:43:41.462287   12743 notify.go:221] Checking for updates...
	I1210 05:43:41.463664   12743 out.go:171] MINIKUBE_LOCATION=22089
	I1210 05:43:41.465430   12743 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:41.466679   12743 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:43:41.467873   12743 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:43:41.469070   12743 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:43:41.471264   12743 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:43:41.471554   12743 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:41.494883   12743 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:43:41.494973   12743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:41.554174   12743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-10 05:43:41.54351898 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:41.554280   12743 docker.go:319] overlay module found
	I1210 05:43:41.556029   12743 out.go:99] Using the docker driver based on user configuration
	I1210 05:43:41.556092   12743 start.go:309] selected driver: docker
	I1210 05:43:41.556097   12743 start.go:927] validating driver "docker" against <nil>
	I1210 05:43:41.556180   12743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:41.614174   12743 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-10 05:43:41.605250551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:41.614339   12743 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:43:41.614904   12743 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 05:43:41.615063   12743 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:43:41.617342   12743 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-560719 host does not exist
	  To start a cluster, run: "minikube start -p download-only-560719"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-560719
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (3.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-656073 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-656073 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.134129094s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 05:43:48.346232   12374 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1210 05:43:48.346279   12374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-656073
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-656073: exit status 85 (79.024979ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684743 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-684743 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-684743                                                                                                                                                          │ download-only-684743 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ -o=json --download-only -p download-only-560719 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-560719 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p download-only-560719                                                                                                                                                          │ download-only-560719 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ start   │ -o=json --download-only -p download-only-656073 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-656073 │ jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:43:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:43:45.263920   13098 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:43:45.264019   13098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:45.264027   13098 out.go:374] Setting ErrFile to fd 2...
	I1210 05:43:45.264031   13098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:45.264236   13098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:43:45.264700   13098 out.go:368] Setting JSON to true
	I1210 05:43:45.265450   13098 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1576,"bootTime":1765343849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:43:45.265525   13098 start.go:143] virtualization: kvm guest
	I1210 05:43:45.267399   13098 out.go:99] [download-only-656073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:43:45.267608   13098 notify.go:221] Checking for updates...
	I1210 05:43:45.268757   13098 out.go:171] MINIKUBE_LOCATION=22089
	I1210 05:43:45.270215   13098 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:45.271637   13098 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:43:45.272884   13098 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:43:45.274129   13098 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1210 05:43:45.276971   13098 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:43:45.277251   13098 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:45.301645   13098 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:43:45.301729   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:45.362029   13098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:43:45.352550429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:45.362128   13098 docker.go:319] overlay module found
	I1210 05:43:45.363804   13098 out.go:99] Using the docker driver based on user configuration
	I1210 05:43:45.363840   13098 start.go:309] selected driver: docker
	I1210 05:43:45.363846   13098 start.go:927] validating driver "docker" against <nil>
	I1210 05:43:45.363934   13098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:45.424732   13098 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-10 05:43:45.414604933 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:43:45.424878   13098 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:43:45.425386   13098 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1210 05:43:45.425571   13098 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:43:45.427389   13098 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-656073 host does not exist
	  To start a cluster, run: "minikube start -p download-only-656073"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-656073
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-773795 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-773795" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-773795
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1210 05:43:49.665879   12374 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-740663 --alsologtostderr --binary-mirror http://127.0.0.1:43475 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-740663" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-740663
--- PASS: TestBinaryMirror (0.82s)

TestOffline (51.75s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-780437 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-780437 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (49.191101551s)
helpers_test.go:176: Cleaning up "offline-crio-780437" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-780437
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-780437: (2.562988873s)
--- PASS: TestOffline (51.75s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-028052
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-028052: exit status 85 (68.380212ms)

-- stdout --
	* Profile "addons-028052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-028052"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-028052
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-028052: exit status 85 (68.796932ms)

-- stdout --
	* Profile "addons-028052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-028052"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (124.81s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-028052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-028052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.808782179s)
--- PASS: TestAddons/Setup (124.81s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-028052 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-028052 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-028052 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-028052 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [dddaa4c9-8f7c-4f58-876b-d749ce609491] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [dddaa4c9-8f7c-4f58-876b-d749ce609491] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003666097s
addons_test.go:696: (dbg) Run:  kubectl --context addons-028052 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-028052 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-028052 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

TestAddons/StoppedEnableDisable (16.74s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-028052
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-028052: (16.439207221s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-028052
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-028052
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-028052
--- PASS: TestAddons/StoppedEnableDisable (16.74s)

TestCertOptions (28.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-088618 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-088618 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.079897931s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-088618 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-088618 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-088618 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-088618" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-088618
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-088618: (6.557846713s)
--- PASS: TestCertOptions (28.47s)

TestCertExpiration (223.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-936135 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-936135 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.065874767s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-936135 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-936135 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.335762442s)
helpers_test.go:176: Cleaning up "cert-expiration-936135" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-936135
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-936135: (2.85057671s)
--- PASS: TestCertExpiration (223.25s)

TestForceSystemdFlag (28.48s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-182760 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-182760 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.410429729s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-182760 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-182760" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-182760
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-182760: (2.767203707s)
--- PASS: TestForceSystemdFlag (28.48s)

TestForceSystemdEnv (26.89s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-605756 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-605756 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.319615906s)
helpers_test.go:176: Cleaning up "force-systemd-env-605756" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-605756
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-605756: (2.56596977s)
--- PASS: TestForceSystemdEnv (26.89s)

TestErrorSpam/setup (21.53s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-324485 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-324485 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-324485 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-324485 --driver=docker  --container-runtime=crio: (21.527886085s)
--- PASS: TestErrorSpam/setup (21.53s)

TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (5.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause: exit status 80 (2.374821501s)

-- stdout --
	* Pausing node nospam-324485 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:49:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause: exit status 80 (1.601847183s)

-- stdout --
	* Pausing node nospam-324485 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:49:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause: exit status 80 (1.591667887s)

-- stdout --
	* Pausing node nospam-324485 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:49:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.57s)

TestErrorSpam/unpause (5.36s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause: exit status 80 (1.839652244s)

-- stdout --
	* Unpausing node nospam-324485 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:49:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause: exit status 80 (1.590846808s)

-- stdout --
	* Unpausing node nospam-324485 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:49:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause: exit status 80 (1.930045701s)

-- stdout --
	* Unpausing node nospam-324485 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T05:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.36s)

TestErrorSpam/stop (2.71s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 stop: (2.492049428s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-324485 --log_dir /tmp/nospam-324485 stop
--- PASS: TestErrorSpam/stop (2.71s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/test/nested/copy/12374/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.44s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-237456 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-237456 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m7.443399858s)
--- PASS: TestFunctional/serial/StartWithProxy (67.44s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.34s)

=== RUN   TestFunctional/serial/SoftStart
I1210 05:50:52.198671   12374 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-237456 --alsologtostderr -v=8
E1210 05:50:56.055978   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.062522   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.074790   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.096213   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.137480   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.219052   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.380717   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:56.702994   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:50:57.344551   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-237456 --alsologtostderr -v=8: (6.332828547s)
functional_test.go:678: soft start took 6.333649407s for "functional-237456" cluster.
I1210 05:50:58.534228   12374 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.34s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-237456 get po -A
E1210 05:50:58.626385   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cache add registry.k8s.io/pause:latest
E1210 05:51:01.188674   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.60s)

TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-237456 /tmp/TestFunctionalserialCacheCmdcacheadd_local2081236343/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cache add minikube-local-cache-test:functional-237456
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cache delete minikube-local-cache-test:functional-237456
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-237456
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.896338ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 kubectl -- --context functional-237456 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-237456 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (41.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-237456 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:51:06.310231   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:16.552581   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:51:37.034437   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-237456 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.185305757s)
functional_test.go:776: restart took 41.185428803s for "functional-237456" cluster.
I1210 05:51:45.782876   12374 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (41.19s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-237456 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 logs: (1.230967171s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 logs --file /tmp/TestFunctionalserialLogsFileCmd1781004742/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 logs --file /tmp/TestFunctionalserialLogsFileCmd1781004742/001/logs.txt: (1.247669817s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

                                                
                                    
TestFunctional/serial/InvalidService (3.89s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-237456 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-237456
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-237456: exit status 115 (348.139271ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31518 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-237456 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.89s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 config get cpus: exit status 14 (74.005136ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 config get cpus: exit status 14 (73.781816ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-237456 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-237456 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 52362: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.58s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-237456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-237456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (196.356992ms)

                                                
                                                
-- stdout --
	* [functional-237456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:52:16.975583   50987 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:52:16.975724   50987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:52:16.975736   50987 out.go:374] Setting ErrFile to fd 2...
	I1210 05:52:16.975743   50987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:52:16.976013   50987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:52:16.976718   50987 out.go:368] Setting JSON to false
	I1210 05:52:16.978035   50987 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2088,"bootTime":1765343849,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:52:16.978124   50987 start.go:143] virtualization: kvm guest
	I1210 05:52:16.979709   50987 out.go:179] * [functional-237456] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:52:16.981101   50987 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:52:16.981133   50987 notify.go:221] Checking for updates...
	I1210 05:52:16.985662   50987 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:52:16.987849   50987 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:52:16.989311   50987 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:52:16.991331   50987 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:52:16.992734   50987 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:52:16.994424   50987 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:52:16.995214   50987 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:52:17.025534   50987 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:52:17.025638   50987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:52:17.094629   50987 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 05:52:17.082680055 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:52:17.094775   50987 docker.go:319] overlay module found
	I1210 05:52:17.096678   50987 out.go:179] * Using the docker driver based on existing profile
	I1210 05:52:17.098341   50987 start.go:309] selected driver: docker
	I1210 05:52:17.098361   50987 start.go:927] validating driver "docker" against &{Name:functional-237456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-237456 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:52:17.098494   50987 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:52:17.100393   50987 out.go:203] 
	W1210 05:52:17.101713   50987 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:52:17.102982   50987 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-237456 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-237456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-237456 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.337498ms)

                                                
                                                
-- stdout --
	* [functional-237456] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:52:16.604632   50701 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:52:16.604740   50701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:52:16.604748   50701 out.go:374] Setting ErrFile to fd 2...
	I1210 05:52:16.604752   50701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:52:16.605060   50701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:52:16.605517   50701 out.go:368] Setting JSON to false
	I1210 05:52:16.606421   50701 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2088,"bootTime":1765343849,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:52:16.606500   50701 start.go:143] virtualization: kvm guest
	I1210 05:52:16.612003   50701 out.go:179] * [functional-237456] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:52:16.613385   50701 notify.go:221] Checking for updates...
	I1210 05:52:16.613418   50701 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:52:16.616719   50701 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:52:16.618115   50701 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:52:16.619567   50701 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:52:16.620844   50701 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:52:16.622076   50701 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:52:16.623781   50701 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:52:16.624369   50701 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:52:16.652560   50701 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:52:16.652690   50701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:52:16.717866   50701 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 05:52:16.707571463 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:52:16.718016   50701 docker.go:319] overlay module found
	I1210 05:52:16.720320   50701 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 05:52:16.721706   50701 start.go:309] selected driver: docker
	I1210 05:52:16.721723   50701 start.go:927] validating driver "docker" against &{Name:functional-237456 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-237456 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:52:16.721831   50701 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:52:16.723901   50701 out.go:203] 
	W1210 05:52:16.725229   50701 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:52:16.726417   50701 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-237456 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-237456 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-j5pl9" [8853fffb-88cf-4094-8013-3cf3c67c7fc0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-j5pl9" [8853fffb-88cf-4094-8013-3cf3c67c7fc0] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003057606s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31689
functional_test.go:1680: http://192.168.49.2:31689: success! body:
Request served by hello-node-connect-7d85dfc575-j5pl9

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31689
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.75s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.93s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [41393628-6057-4b3b-88ec-edf749742c1f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004637268s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-237456 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-237456 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-237456 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-237456 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:52:01.045999   12374 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8117dc8a-966f-4525-a931-3aecf49e28b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8117dc8a-966f-4525-a931-3aecf49e28b1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004010576s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-237456 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-237456 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-237456 delete -f testdata/storage-provisioner/pod.yaml: (1.153428153s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-237456 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ed54e153-0fb4-4eb8-a539-d0506c22de5b] Pending
helpers_test.go:353: "sp-pod" [ed54e153-0fb4-4eb8-a539-d0506c22de5b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00421728s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-237456 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.93s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.22s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh -n functional-237456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cp functional-237456:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3170992951/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh -n functional-237456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh -n functional-237456 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)

                                                
                                    
TestFunctional/parallel/MySQL (23.64s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-237456 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-c5tw2" [49bdfc54-ea68-4367-ae77-3e38aa77e207] Pending
helpers_test.go:353: "mysql-6bcdcbc558-c5tw2" [49bdfc54-ea68-4367-ae77-3e38aa77e207] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-c5tw2" [49bdfc54-ea68-4367-ae77-3e38aa77e207] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003928825s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;": exit status 1 (102.338787ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:52:08.183277   12374 retry.go:31] will retry after 1.053908597s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;": exit status 1 (142.602046ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:52:09.383385   12374 retry.go:31] will retry after 1.906988812s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;": exit status 1 (97.587154ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:52:11.389228   12374 retry.go:31] will retry after 1.210370308s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;": exit status 1 (97.01562ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:52:12.697118   12374 retry.go:31] will retry after 3.70000127s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-237456 exec mysql-6bcdcbc558-c5tw2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.64s)

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12374/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /etc/test/nested/copy/12374/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12374.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /etc/ssl/certs/12374.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12374.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /usr/share/ca-certificates/12374.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/123742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /etc/ssl/certs/123742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/123742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /usr/share/ca-certificates/123742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-237456 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh "sudo systemctl is-active docker": exit status 1 (318.632909ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh "sudo systemctl is-active containerd": exit status 1 (341.932687ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                    
TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-237456 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-237456
localhost/kicbase/echo-server:functional-237456
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-237456 image ls --format short --alsologtostderr:
I1210 05:52:18.523815   52341 out.go:360] Setting OutFile to fd 1 ...
I1210 05:52:18.524138   52341 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:18.524150   52341 out.go:374] Setting ErrFile to fd 2...
I1210 05:52:18.524156   52341 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:18.524494   52341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:52:18.525090   52341 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:18.525196   52341 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:18.525659   52341 cli_runner.go:164] Run: docker container inspect functional-237456 --format={{.State.Status}}
I1210 05:52:18.546668   52341 ssh_runner.go:195] Run: systemctl --version
I1210 05:52:18.546730   52341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237456
I1210 05:52:18.567531   52341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-237456/id_rsa Username:docker}
I1210 05:52:18.666643   52341 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-237456 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-237456  │ 1214db09399bc │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-237456  │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-237456 image ls --format table --alsologtostderr:
I1210 05:52:19.021217   52653 out.go:360] Setting OutFile to fd 1 ...
I1210 05:52:19.021435   52653 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:19.021445   52653 out.go:374] Setting ErrFile to fd 2...
I1210 05:52:19.021449   52653 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:19.021794   52653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:52:19.022531   52653 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:19.022672   52653 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:19.023241   52653 cli_runner.go:164] Run: docker container inspect functional-237456 --format={{.State.Status}}
I1210 05:52:19.047334   52653 ssh_runner.go:195] Run: systemctl --version
I1210 05:52:19.047386   52653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237456
I1210 05:52:19.072798   52653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-237456/id_rsa Username:docker}
I1210 05:52:19.176782   52653 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-237456 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba15
36c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","regis
try.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apise
rver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localh
ost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-237456"],"size":"4943877"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"1214db09399bc8042783362d9ebda6a0b760b7fc5fcd8ec4bf93be71a3e6d432","repoDigests":["localhost/minikube-local-cache-test@sha256:e1c8e3aa2518f8177a484ce4a0e3b68389c603142a3f9c18677a6b2d3970cab1"],
"repoTags":["localhost/minikube-local-cache-test:functional-237456"],"size":"3330"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["regist
ry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-237456 image ls --format json --alsologtostderr:
I1210 05:52:18.772125   52457 out.go:360] Setting OutFile to fd 1 ...
I1210 05:52:18.772426   52457 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:18.772437   52457 out.go:374] Setting ErrFile to fd 2...
I1210 05:52:18.772441   52457 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:18.772669   52457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:52:18.773167   52457 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:18.773252   52457 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:18.773777   52457 cli_runner.go:164] Run: docker container inspect functional-237456 --format={{.State.Status}}
I1210 05:52:18.796043   52457 ssh_runner.go:195] Run: systemctl --version
I1210 05:52:18.796093   52457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237456
I1210 05:52:18.817798   52457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-237456/id_rsa Username:docker}
I1210 05:52:18.919095   52457 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-237456 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1214db09399bc8042783362d9ebda6a0b760b7fc5fcd8ec4bf93be71a3e6d432
repoDigests:
- localhost/minikube-local-cache-test@sha256:e1c8e3aa2518f8177a484ce4a0e3b68389c603142a3f9c18677a6b2d3970cab1
repoTags:
- localhost/minikube-local-cache-test:functional-237456
size: "3330"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-237456
size: "4943877"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-237456 image ls --format yaml --alsologtostderr:
I1210 05:52:18.528337   52347 out.go:360] Setting OutFile to fd 1 ...
I1210 05:52:18.528455   52347 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:18.528461   52347 out.go:374] Setting ErrFile to fd 2...
I1210 05:52:18.528488   52347 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:18.528763   52347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:52:18.529324   52347 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:18.529446   52347 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:18.529899   52347 cli_runner.go:164] Run: docker container inspect functional-237456 --format={{.State.Status}}
I1210 05:52:18.550688   52347 ssh_runner.go:195] Run: systemctl --version
I1210 05:52:18.550736   52347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237456
I1210 05:52:18.571156   52347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-237456/id_rsa Username:docker}
I1210 05:52:18.668551   52347 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
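Note: the three ImageList subtests above all drive the same "image ls" command, only varying the --format flag. A minimal way to reproduce the listings by hand against this run's profile; the jq filter on the JSON form is an illustrative assumption and not something the test itself runs.

# List images known to the container runtime inside the functional-237456 profile.
out/minikube-linux-amd64 -p functional-237456 image ls --format table
out/minikube-linux-amd64 -p functional-237456 image ls --format yaml
# The JSON form is the easiest to post-process (jq shown only as an example).
out/minikube-linux-amd64 -p functional-237456 image ls --format json | jq -r '.[].repoTags[]'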

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh pgrep buildkitd: exit status 1 (292.33988ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image build -t localhost/my-image:functional-237456 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 image build -t localhost/my-image:functional-237456 testdata/build --alsologtostderr: (3.677832901s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-237456 image build -t localhost/my-image:functional-237456 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8eb9b991cc3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-237456
--> 0eb9f1488c7
Successfully tagged localhost/my-image:functional-237456
0eb9f1488c752ba1298fb53ecfb26db44c6a081f8061f14c72f384c1dcadb999
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-237456 image build -t localhost/my-image:functional-237456 testdata/build --alsologtostderr:
I1210 05:52:19.073565   52672 out.go:360] Setting OutFile to fd 1 ...
I1210 05:52:19.073912   52672 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:19.073924   52672 out.go:374] Setting ErrFile to fd 2...
I1210 05:52:19.073931   52672 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:52:19.074243   52672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:52:19.075191   52672 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:19.075814   52672 config.go:182] Loaded profile config "functional-237456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 05:52:19.076262   52672 cli_runner.go:164] Run: docker container inspect functional-237456 --format={{.State.Status}}
I1210 05:52:19.096323   52672 ssh_runner.go:195] Run: systemctl --version
I1210 05:52:19.096391   52672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-237456
I1210 05:52:19.118655   52672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-237456/id_rsa Username:docker}
I1210 05:52:19.226399   52672 build_images.go:162] Building image from path: /tmp/build.3287560387.tar
I1210 05:52:19.226463   52672 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:52:19.238060   52672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3287560387.tar
I1210 05:52:19.243083   52672 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3287560387.tar: stat -c "%s %y" /var/lib/minikube/build/build.3287560387.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3287560387.tar': No such file or directory
I1210 05:52:19.243141   52672 ssh_runner.go:362] scp /tmp/build.3287560387.tar --> /var/lib/minikube/build/build.3287560387.tar (3072 bytes)
I1210 05:52:19.267207   52672 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3287560387
I1210 05:52:19.278014   52672 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3287560387 -xf /var/lib/minikube/build/build.3287560387.tar
I1210 05:52:19.289064   52672 crio.go:315] Building image: /var/lib/minikube/build/build.3287560387
I1210 05:52:19.289128   52672 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-237456 /var/lib/minikube/build/build.3287560387 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:52:22.645207   52672 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-237456 /var/lib/minikube/build/build.3287560387 --cgroup-manager=cgroupfs: (3.356046197s)
I1210 05:52:22.645274   52672 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3287560387
I1210 05:52:22.656718   52672 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3287560387.tar
I1210 05:52:22.665604   52672 build_images.go:218] Built localhost/my-image:functional-237456 from /tmp/build.3287560387.tar
I1210 05:52:22.665637   52672 build_images.go:134] succeeded building to: functional-237456
I1210 05:52:22.665643   52672 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls
2025/12/10 05:52:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)
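Note: the build output above implies a three-step context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A sketch of an equivalent context plus the same image build invocation; the directory path and the payload written to content.txt are placeholders, and the real testdata/build directory may differ.

# Recreate a build context equivalent to the three steps shown in the log.
mkdir -p /tmp/build-sketch
echo "placeholder" > /tmp/build-sketch/content.txt
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Build it with the cluster's runtime, the same way the test does.
out/minikube-linux-amd64 -p functional-237456 image build -t localhost/my-image:functional-237456 /tmp/build-sketch --alsologtostderr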

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-237456
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 update-context --alsologtostderr -v=2
E1210 05:52:17.996134   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image load --daemon kicbase/echo-server:functional-237456 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 image load --daemon kicbase/echo-server:functional-237456 --alsologtostderr: (1.146173615s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 46505: os: process already finished
helpers_test.go:520: unable to terminate pid 46236: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-237456 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [15170482-e2f2-4d73-91b8-780d88fd765a] Pending
helpers_test.go:353: "nginx-svc" [15170482-e2f2-4d73-91b8-780d88fd765a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [15170482-e2f2-4d73-91b8-780d88fd765a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.004241476s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-237456
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image load --daemon kicbase/echo-server:functional-237456 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image save kicbase/echo-server:functional-237456 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image rm kicbase/echo-server:functional-237456 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-237456
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 image save --daemon kicbase/echo-server:functional-237456 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-237456
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
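Note: the four ImageCommands subtests above form a save/remove/load round trip. The same sequence run by hand; the tarball path is an example (the test writes under its workspace).

# Export the image from the cluster's runtime to a tarball, drop it, then load it back.
out/minikube-linux-amd64 -p functional-237456 image save kicbase/echo-server:functional-237456 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-237456 image rm kicbase/echo-server:functional-237456
out/minikube-linux-amd64 -p functional-237456 image load /tmp/echo-server-save.tar
# Or copy the image straight into the host's Docker daemon instead of a file.
out/minikube-linux-amd64 -p functional-237456 image save --daemon kicbase/echo-server:functional-237456
docker image inspect localhost/kicbase/echo-server:functional-237456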

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdany-port1731270035/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765345924167935383" to /tmp/TestFunctionalparallelMountCmdany-port1731270035/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765345924167935383" to /tmp/TestFunctionalparallelMountCmdany-port1731270035/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765345924167935383" to /tmp/TestFunctionalparallelMountCmdany-port1731270035/001/test-1765345924167935383
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.183156ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:52:04.464477   12374 retry.go:31] will retry after 579.261252ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:52 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:52 test-1765345924167935383
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh cat /mount-9p/test-1765345924167935383
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-237456 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c9374749-feda-4fb5-89e7-524263216c78] Pending
helpers_test.go:353: "busybox-mount" [c9374749-feda-4fb5-89e7-524263216c78] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [c9374749-feda-4fb5-89e7-524263216c78] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
I1210 05:52:08.998259   12374 kapi.go:150] Service nginx-svc in namespace default found.
helpers_test.go:353: "busybox-mount" [c9374749-feda-4fb5-89e7-524263216c78] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003563594s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-237456 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh stat /mount-9p/created-by-pod
I1210 05:52:10.431965   12374 detect.go:223] nested VM detected
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdany-port1731270035/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.88s)
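Note: MountCmd/any-port drives a 9p mount end to end. The equivalent manual sequence follows; the host directory is a placeholder and backgrounding the mount process with & is an assumption (the test runs it as a managed daemon).

# Mount a host directory into the guest over 9p, verify it, then unmount.
out/minikube-linux-amd64 mount -p functional-237456 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-237456 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-237456 ssh "sudo umount -f /mount-9p"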

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-237456 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.13.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
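Note: the TunnelCmd subtests above cover the full tunnel lifecycle: start the tunnel, create a LoadBalancer service, read back its ingress IP, reach it, then stop the tunnel. In the sketch below the curl check stands in for the HTTP probe the test performs internally, and running the tunnel as a background job is an assumption.

# Route LoadBalancer traffic into the cluster, exercise it, then tear it down.
out/minikube-linux-amd64 -p functional-237456 tunnel --alsologtostderr &
kubectl --context functional-237456 apply -f testdata/testsvc.yaml
INGRESS_IP=$(kubectl --context functional-237456 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sS "http://${INGRESS_IP}"
kill %1    # stop the background tunnel, as DeleteTunnel does by terminating the process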

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdspecific-port953037211/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.129213ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:52:11.328219   12374 retry.go:31] will retry after 663.461405ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdspecific-port953037211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh "sudo umount -f /mount-9p": exit status 1 (286.575375ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-237456 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdspecific-port953037211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup192234034/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup192234034/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup192234034/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T" /mount1: exit status 1 (343.484914ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 05:52:13.392753   12374 retry.go:31] will retry after 299.336194ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-237456 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup192234034/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup192234034/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-237456 /tmp/TestFunctionalparallelMountCmdVerifyCleanup192234034/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
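Note: VerifyCleanup checks that a single --kill invocation removes every mount belonging to the profile. A short sketch; the host directory is a placeholder and the & backgrounding is an assumption.

# Start three mounts of the same host directory, then clean them all up at once.
out/minikube-linux-amd64 mount -p functional-237456 /tmp/host-dir:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-237456 /tmp/host-dir:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-237456 /tmp/host-dir:/mount3 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-237456 --kill=true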

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-237456 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-237456 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-l7hww" [481e6803-1da6-465b-ade9-a5952247bc75] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-l7hww" [481e6803-1da6-465b-ade9-a5952247bc75] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003896744s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.14s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "401.988488ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "74.999984ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "380.61315ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "74.492624ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
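Note: the ProfileCmd subtests time the different profile listing modes; the light variants skip per-cluster status probes, which is why they complete in roughly 75ms versus ~400ms above. The same invocations, runnable directly:

# Full listing, then the lighter variants the test also times.
out/minikube-linux-amd64 profile list
out/minikube-linux-amd64 profile list -l
out/minikube-linux-amd64 profile list -o json
out/minikube-linux-amd64 profile list -o json --light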

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 service list: (1.857578766s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.86s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-237456 service list -o json: (1.715603298s)
functional_test.go:1504: Took "1.715703254s" to run "out/minikube-linux-amd64 -p functional-237456 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30577
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-237456 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30577
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)
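Note: the ServiceCmd subtests above walk through deploying a workload, exposing it as a NodePort service, and resolving its URL in several output formats. The same steps by hand:

# Deploy and expose, then ask minikube for the service endpoint.
kubectl --context functional-237456 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-237456 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-237456 service list
out/minikube-linux-amd64 -p functional-237456 service hello-node --url
out/minikube-linux-amd64 -p functional-237456 service --namespace=default --https --url hello-node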

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-237456
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-237456
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-237456
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-8832/.minikube/files/etc/test/nested/copy/12374/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (37.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-228089 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-228089 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (37.415481262s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (37.42s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1210 05:53:07.090663   12374 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-228089 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-228089 --alsologtostderr -v=8: (6.220327412s)
functional_test.go:678: soft start took 6.221019094s for "functional-228089" cluster.
I1210 05:53:13.311711   12374 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-228089 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1890368618/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cache add minikube-local-cache-test:functional-228089
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cache delete minikube-local-cache-test:functional-228089
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-228089
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.384305ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.56s)
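Note: the cache_reload sequence above is a useful recipe on its own: delete the image inside the node, confirm crictl no longer sees it, then let `cache reload` push the host-side copy back. A minimal Go sketch of that round trip, shelling out the same way the harness does; it assumes a minikube binary on PATH (rather than the out/ build used by CI) and the existing functional-228089 profile.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out and returns combined output, mirroring how the
// integration tests drive the minikube CLI through exec.
func run(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "functional-228089" // profile from this run
	const img = "registry.k8s.io/pause:latest"

	// 1. Remove the image inside the node.
	if out, err := run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+img); err != nil {
		log.Fatalf("rmi failed: %v\n%s", err, out)
	}
	// 2. inspecti must now fail: the image is gone from the node.
	if _, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}
	// 3. cache reload loads the cached copy back into the node.
	if out, err := run("minikube", "-p", profile, "cache", "reload"); err != nil {
		log.Fatalf("cache reload failed: %v\n%s", err, out)
	}
	// 4. The image is visible again.
	if out, err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatalf("image still missing after reload: %v\n%s", err, out)
	}
	fmt.Println("cache reload round trip OK")
}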

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 kubectl -- --context functional-228089 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-228089 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (61.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-228089 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:53:39.920664   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-228089 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m1.751001314s)
functional_test.go:776: restart took 1m1.751181708s for "functional-228089" cluster.
I1210 05:54:21.048299   12374 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (61.75s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-228089 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
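Note: ComponentHealth boils down to reading each control-plane pod's phase and Ready condition from kubectl's JSON output. A small Go sketch of that check, assuming kubectl on PATH and the functional-228089 context; the struct declares only the Pod fields the check needs.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList is a minimal slice of the Pod schema: name, phase, and conditions.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-228089",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}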

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 logs: (1.286312809s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1064920824/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1064920824/001/logs.txt: (1.302638779s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-228089 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-228089
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-228089: exit status 115 (346.790495ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31779 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-228089 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 config get cpus: exit status 14 (96.333682ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 config get cpus: exit status 14 (82.671459ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.50s)
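Note: the assertion behind ConfigCmd is that `config get` on an unset key exits with status 14 rather than printing an empty value. If you need the same check in your own tooling, the exit status can be read from exec.ExitError; a sketch under the same profile-name assumption as above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode extracts the process exit status from an exec error (0 if nil).
func exitCode(err error) int {
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func main() {
	const profile = "functional-228089" // profile from this run

	// After "config unset cpus", "config get cpus" is expected to exit 14.
	err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").Run()
	if exitCode(err) == 14 {
		fmt.Println("cpus is unset, as expected")
	} else {
		fmt.Printf("unexpected result: %v\n", err)
	}
}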

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-228089 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-228089 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 69411: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (7.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-228089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-228089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (195.651049ms)

                                                
                                                
-- stdout --
	* [functional-228089] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:54:39.264153   66915 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:54:39.264310   66915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:39.264324   66915 out.go:374] Setting ErrFile to fd 2...
	I1210 05:54:39.264329   66915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:39.264621   66915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:54:39.265183   66915 out.go:368] Setting JSON to false
	I1210 05:54:39.266484   66915 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2230,"bootTime":1765343849,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:54:39.266566   66915 start.go:143] virtualization: kvm guest
	I1210 05:54:39.268660   66915 out.go:179] * [functional-228089] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 05:54:39.270262   66915 notify.go:221] Checking for updates...
	I1210 05:54:39.270280   66915 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:54:39.271686   66915 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:54:39.273308   66915 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:54:39.274686   66915 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:54:39.276010   66915 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:54:39.277445   66915 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:54:39.279441   66915 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 05:54:39.280272   66915 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:54:39.309960   66915 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:54:39.310056   66915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:39.370362   66915 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 05:54:39.359195624 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:54:39.370532   66915 docker.go:319] overlay module found
	I1210 05:54:39.372503   66915 out.go:179] * Using the docker driver based on existing profile
	I1210 05:54:39.373695   66915 start.go:309] selected driver: docker
	I1210 05:54:39.373709   66915 start.go:927] validating driver "docker" against &{Name:functional-228089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-228089 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:39.373803   66915 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:54:39.376082   66915 out.go:203] 
	W1210 05:54:39.381656   66915 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:54:39.383438   66915 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-228089 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.45s)
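Note: `--dry-run` validates the requested configuration against the existing profile without changing it, which is why the 250MB request above fails with exit status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY reason (the usable minimum is 1800MB). A sketch that checks for that outcome, assuming minikube on PATH; the exit code and reason string are taken from the output above.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask for an impossible memory size in dry-run mode; nothing is created,
	// but the request is still validated against minikube's memory floor.
	cmd := exec.Command("minikube", "start", "-p", "functional-228089",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 &&
		strings.Contains(stderr.String(), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("memory validation rejected the request, as expected")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}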

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-228089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-228089 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (184.82428ms)

                                                
                                                
-- stdout --
	* [functional-228089] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 05:54:39.475276   67157 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:54:39.475554   67157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:39.475564   67157 out.go:374] Setting ErrFile to fd 2...
	I1210 05:54:39.475569   67157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:54:39.475875   67157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:54:39.476364   67157 out.go:368] Setting JSON to false
	I1210 05:54:39.477389   67157 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2230,"bootTime":1765343849,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 05:54:39.477481   67157 start.go:143] virtualization: kvm guest
	I1210 05:54:39.478953   67157 out.go:179] * [functional-228089] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1210 05:54:39.480003   67157 notify.go:221] Checking for updates...
	I1210 05:54:39.480018   67157 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 05:54:39.481289   67157 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:54:39.482478   67157 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 05:54:39.483644   67157 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 05:54:39.484740   67157 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 05:54:39.486043   67157 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:54:39.487557   67157 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 05:54:39.488397   67157 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:54:39.514494   67157 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 05:54:39.514603   67157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:54:39.579959   67157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-10 05:54:39.569167485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:54:39.580080   67157 docker.go:319] overlay module found
	I1210 05:54:39.581819   67157 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 05:54:39.582951   67157 start.go:309] selected driver: docker
	I1210 05:54:39.582968   67157 start.go:927] validating driver "docker" against &{Name:functional-228089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-228089 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:54:39.583073   67157 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:54:39.585012   67157 out.go:203] 
	W1210 05:54:39.586398   67157 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 05:54:39.588429   67157 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.10s)
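Note: StatusCmd exercises both a custom Go-template format (the field names in the -f string above) and `-o json`. A sketch that reads the JSON form; the struct fields follow the template used by the test ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}), but the exact JSON shape of `minikube status -o json` is an assumption, not something shown in this log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// status mirrors the field names used in the Go template above; treat it as
// an assumed, minimal view of the single-node status object.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-228089", "status", "-o", "json").Output()
	if err != nil {
		// status uses non-zero exit codes to encode degraded states; keep going
		// and let the JSON (if any) tell the rest of the story.
		log.Printf("status exited with: %v", err)
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}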

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-228089 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-228089 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-2h4g5" [ecbbe16f-cf77-495b-adf0-b2f2440f4fb5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-2h4g5" [ecbbe16f-cf77-495b-adf0-b2f2440f4fb5] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003829431s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32018
functional_test.go:1680: http://192.168.49.2:32018: success! body:
Request served by hello-node-connect-9f67c86d4-2h4g5

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32018
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.57s)
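Note: ServiceCmdConnect creates an echo-server deployment, exposes it as a NodePort service, asks minikube for the URL, and expects the response body to name the serving pod. A sketch of the last two steps (URL lookup plus HTTP GET), assuming the hello-node-connect service already exists and minikube is on PATH.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the exposed service.
	out, err := exec.Command("minikube", "-p", "functional-228089",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))

	// The echo server replies with a body that names the pod serving the request.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %s", url, body)
}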

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (22.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [db38a871-12de-4ec5-9b99-674c0ac9c484] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003362699s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-228089 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-228089 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-228089 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-228089 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:54:34.820061   12374 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4bb2c061-6d4d-44a6-a9c5-6c023a23469a] Pending
helpers_test.go:353: "sp-pod" [4bb2c061-6d4d-44a6-a9c5-6c023a23469a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.008678086s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-228089 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-228089 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-228089 apply -f testdata/storage-provisioner/pod.yaml
I1210 05:54:42.028241   12374 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [67294a2d-794d-4689-9a27-e7dc67f8e204] Pending
helpers_test.go:353: "sp-pod" [67294a2d-794d-4689-9a27-e7dc67f8e204] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [67294a2d-794d-4689-9a27-e7dc67f8e204] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004173084s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-228089 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (22.69s)
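Note: the persistence check above is: write a marker file through the first pod, delete the pod, recreate it against the same claim, and confirm the file survived. A sketch of that sequence via kubectl, assuming the same pvc.yaml/pod.yaml manifests from the test's testdata directory and the functional-228089 context.

package main

import (
	"log"
	"os/exec"
)

// kube runs one kubectl step against the functional-228089 context and stops
// on the first failure, the same shell-out style the test uses.
func kube(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-228089"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Manifests are the ones used by the test (testdata/storage-provisioner/*).
	kube("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kube("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kube("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=120s")
	// Write a marker file into the claim through the first pod...
	kube("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// ...recycle the pod...
	kube("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kube("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kube("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=120s")
	// ...and confirm the file outlived the pod.
	kube("exec", "sp-pod", "--", "ls", "/tmp/mount")
}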

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh -n functional-228089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cp functional-228089:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp126282396/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh -n functional-228089 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh -n functional-228089 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-228089 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-jqmnm" [c45f6c05-b822-48cb-afec-6d13349fa957] Pending
helpers_test.go:353: "mysql-7d7b65bc95-jqmnm" [c45f6c05-b822-48cb-afec-6d13349fa957] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-jqmnm" [c45f6c05-b822-48cb-afec-6d13349fa957] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 14.011348639s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;": exit status 1 (228.654278ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:54:54.080047   12374 retry.go:31] will retry after 1.420436162s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;": exit status 1 (120.998444ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:54:55.621739   12374 retry.go:31] will retry after 1.2067077s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;": exit status 1 (102.91334ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:54:56.931789   12374 retry.go:31] will retry after 2.139024973s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;": exit status 1 (92.240513ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:54:59.164345   12374 retry.go:31] will retry after 3.136964584s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228089 exec mysql-7d7b65bc95-jqmnm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (22.76s)
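Note: the MySQL block shows why the harness retries: right after the pod reports Running, mysqld is still initializing, so the first attempts fail with access-denied and then no-socket errors before one succeeds. A retry-with-backoff sketch in the same spirit as the retry.go lines above; the pod name is the one from this run and the timings are illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-228089", "exec", "mysql-7d7b65bc95-jqmnm", "--",
		"mysql", "-ppassword", "-e", "show databases;"}

	backoff := time.Second // illustrative starting delay
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// mysqld is typically still starting: access denied first, then
		// "Can't connect ... mysqld.sock" until it begins accepting connections.
		log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	log.Fatal("mysql never became ready")
}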

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12374/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /etc/test/nested/copy/12374/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)
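Note: FileSync verifies that anything placed under the files/ directory of the minikube home on the host shows up at the same path inside the node (here files/etc/test/nested/copy/12374/hosts becomes /etc/test/nested/copy/12374/hosts). A sketch that stages a file and reads it back over ssh; the file path is illustrative, MINIKUBE_HOME is assumed to point at the .minikube directory as in this run, and the sync only happens on the next `minikube start`.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME") // assumed to be the .minikube directory
	if home == "" {
		log.Fatal("set MINIKUBE_HOME first")
	}

	// Anything under $MINIKUBE_HOME/files/<path> is copied to /<path> in the
	// node when the cluster (re)starts. The path below is illustrative.
	src := filepath.Join(home, "files", "etc", "demo", "synced.txt")
	if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(src, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// A restart picks the file up; then it is readable inside the node.
	for _, args := range [][]string{
		{"start", "-p", "functional-228089"},
		{"-p", "functional-228089", "ssh", "sudo cat /etc/demo/synced.txt"},
	} {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		os.Stdout.Write(out)
	}
}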

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12374.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /etc/ssl/certs/12374.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12374.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /usr/share/ca-certificates/12374.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/123742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /etc/ssl/certs/123742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/123742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /usr/share/ca-certificates/123742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.89s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-228089 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh "sudo systemctl is-active docker": exit status 1 (285.650996ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh "sudo systemctl is-active containerd": exit status 1 (307.811281ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.59s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.23s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-228089 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-228089 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-dslj2" [68a48b81-255e-4a5d-9572-0383dac6e7a8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-dslj2" [68a48b81-255e-4a5d-9572-0383dac6e7a8] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003497598s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (7.21s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-228089 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-228089 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-228089 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-228089 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 63128: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-228089 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-228089 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [166bcf23-a2a4-4c44-8078-640fd0a74680] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [166bcf23-a2a4-4c44-8078-640fd0a74680] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003189804s
I1210 05:54:36.976413   12374 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.51s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 service list -o json
functional_test.go:1504: Took "498.351467ms" to run "out/minikube-linux-amd64 -p functional-228089 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.50s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31694
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-228089 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.79.147 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-228089 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.20s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.50s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "377.757772ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "74.568719ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.45s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (13.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3102941779/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765346078251589460" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3102941779/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765346078251589460" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3102941779/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765346078251589460" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3102941779/001/test-1765346078251589460
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.377592ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 05:54:38.582352   12374 retry.go:31] will retry after 678.345043ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 05:54 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 05:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 05:54 test-1765346078251589460
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh cat /mount-9p/test-1765346078251589460
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-228089 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [94bcc66c-69e4-4940-ab7a-d2fad41a0be1] Pending
helpers_test.go:353: "busybox-mount" [94bcc66c-69e4-4940-ab7a-d2fad41a0be1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [94bcc66c-69e4-4940-ab7a-d2fad41a0be1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [94bcc66c-69e4-4940-ab7a-d2fad41a0be1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003475559s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-228089 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3102941779/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (13.20s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "402.945813ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.788036ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.47s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.62s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (1.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 image ls --format short --alsologtostderr: (1.684788804s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-228089 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-228089
localhost/kicbase/echo-server:functional-228089
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-228089 image ls --format short --alsologtostderr:
I1210 05:54:52.531622   70400 out.go:360] Setting OutFile to fd 1 ...
I1210 05:54:52.531728   70400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:52.531733   70400 out.go:374] Setting ErrFile to fd 2...
I1210 05:54:52.531737   70400 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:52.531917   70400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:54:52.532481   70400 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:52.532614   70400 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:52.533070   70400 cli_runner.go:164] Run: docker container inspect functional-228089 --format={{.State.Status}}
I1210 05:54:52.555150   70400 ssh_runner.go:195] Run: systemctl --version
I1210 05:54:52.555198   70400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-228089
I1210 05:54:52.578030   70400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-228089/id_rsa Username:docker}
I1210 05:54:52.682260   70400 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 05:54:54.136631   70400 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.454305408s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (1.69s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-228089 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ d4918ca78576a │ 54.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-228089  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-228089  │ 1214db09399bc │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-228089 image ls --format table --alsologtostderr:
I1210 05:54:55.851413   71902 out.go:360] Setting OutFile to fd 1 ...
I1210 05:54:55.851656   71902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:55.851664   71902 out.go:374] Setting ErrFile to fd 2...
I1210 05:54:55.851668   71902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:55.851842   71902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:54:55.852380   71902 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:55.852482   71902 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:55.852864   71902 cli_runner.go:164] Run: docker container inspect functional-228089 --format={{.State.Status}}
I1210 05:54:55.870198   71902 ssh_runner.go:195] Run: systemctl --version
I1210 05:54:55.870256   71902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-228089
I1210 05:54:55.889837   71902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-228089/id_rsa Username:docker}
I1210 05:54:55.987966   71902 ssh_runner.go:195] Run: sudo crictl images --output json
2025/12/10 05:54:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-228089 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"1214db09399bc8042783362d9ebda6a0b760b7fc5fcd8ec4bf93be71a3e6d432","repoDigests":["localhost/minikube-local-cache-test@sha256:e1c8e3aa2518f8177a484ce4a0e3b68389c603142a3f9c18677a6b2d3970cab1"],"repoTags":["localhost/minikube-local-cache-test:functional-228089"],"size":"
3330"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54242145"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7b
d97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"20d0be4ee45242
864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo
-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-228089"],"size":"4944818"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056
807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7
ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-228089 image ls --format json --alsologtostderr:
I1210 05:54:55.625237   71841 out.go:360] Setting OutFile to fd 1 ...
I1210 05:54:55.625462   71841 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:55.625497   71841 out.go:374] Setting ErrFile to fd 2...
I1210 05:54:55.625505   71841 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:55.625670   71841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:54:55.626184   71841 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:55.626268   71841 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:55.626702   71841 cli_runner.go:164] Run: docker container inspect functional-228089 --format={{.State.Status}}
I1210 05:54:55.646859   71841 ssh_runner.go:195] Run: systemctl --version
I1210 05:54:55.646903   71841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-228089
I1210 05:54:55.664829   71841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-228089/id_rsa Username:docker}
I1210 05:54:55.759094   71841 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-228089 image ls --format yaml --alsologtostderr:
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-228089
size: "4944818"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:97a145fb5809fd90ebdf66711f69b97e29ea99da5403c20310dcc425974a14f9
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54242145"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1214db09399bc8042783362d9ebda6a0b760b7fc5fcd8ec4bf93be71a3e6d432
repoDigests:
- localhost/minikube-local-cache-test@sha256:e1c8e3aa2518f8177a484ce4a0e3b68389c603142a3f9c18677a6b2d3970cab1
repoTags:
- localhost/minikube-local-cache-test:functional-228089
size: "3330"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-228089 image ls --format yaml --alsologtostderr:
I1210 05:54:54.214539   71202 out.go:360] Setting OutFile to fd 1 ...
I1210 05:54:54.214627   71202 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:54.214634   71202 out.go:374] Setting ErrFile to fd 2...
I1210 05:54:54.214638   71202 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:54.214827   71202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:54:54.215329   71202 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:54.215416   71202 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:54.215861   71202 cli_runner.go:164] Run: docker container inspect functional-228089 --format={{.State.Status}}
I1210 05:54:54.236731   71202 ssh_runner.go:195] Run: systemctl --version
I1210 05:54:54.236788   71202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-228089
I1210 05:54:54.257432   71202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-228089/id_rsa Username:docker}
I1210 05:54:54.353521   71202 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh pgrep buildkitd: exit status 1 (330.187615ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image build -t localhost/my-image:functional-228089 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 image build -t localhost/my-image:functional-228089 testdata/build --alsologtostderr: (2.252087717s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-228089 image build -t localhost/my-image:functional-228089 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6aaede9ced2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-228089
--> a13e3af585c
Successfully tagged localhost/my-image:functional-228089
a13e3af585c36785191da988269a56ced2ce1abbe846c24657a74a11018a83b4
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-228089 image build -t localhost/my-image:functional-228089 testdata/build --alsologtostderr:
I1210 05:54:54.792591   71469 out.go:360] Setting OutFile to fd 1 ...
I1210 05:54:54.792678   71469 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:54.792685   71469 out.go:374] Setting ErrFile to fd 2...
I1210 05:54:54.792689   71469 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:54:54.792934   71469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
I1210 05:54:54.793551   71469 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:54.794333   71469 config.go:182] Loaded profile config "functional-228089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 05:54:54.794941   71469 cli_runner.go:164] Run: docker container inspect functional-228089 --format={{.State.Status}}
I1210 05:54:54.818817   71469 ssh_runner.go:195] Run: systemctl --version
I1210 05:54:54.818874   71469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-228089
I1210 05:54:54.840202   71469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/functional-228089/id_rsa Username:docker}
I1210 05:54:54.946058   71469 build_images.go:162] Building image from path: /tmp/build.2481147578.tar
I1210 05:54:54.946117   71469 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:54:54.956456   71469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2481147578.tar
I1210 05:54:54.961836   71469 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2481147578.tar: stat -c "%s %y" /var/lib/minikube/build/build.2481147578.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2481147578.tar': No such file or directory
I1210 05:54:54.961873   71469 ssh_runner.go:362] scp /tmp/build.2481147578.tar --> /var/lib/minikube/build/build.2481147578.tar (3072 bytes)
I1210 05:54:54.990045   71469 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2481147578
I1210 05:54:55.001210   71469 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2481147578 -xf /var/lib/minikube/build/build.2481147578.tar
I1210 05:54:55.011669   71469 crio.go:315] Building image: /var/lib/minikube/build/build.2481147578
I1210 05:54:55.011734   71469 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-228089 /var/lib/minikube/build/build.2481147578 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 05:54:56.952579   71469 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-228089 /var/lib/minikube/build/build.2481147578 --cgroup-manager=cgroupfs: (1.940823946s)
I1210 05:54:56.952634   71469 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2481147578
I1210 05:54:56.961194   71469 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2481147578.tar
I1210 05:54:56.969120   71469 build_images.go:218] Built localhost/my-image:functional-228089 from /tmp/build.2481147578.tar
I1210 05:54:56.969154   71469 build_images.go:134] succeeded building to: functional-228089
I1210 05:54:56.969159   71469 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.82s)
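The podman output above shows the three steps in the test's build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of reproducing the same flow by hand; the scratch directory and the contents of content.txt are assumed, only the minikube subcommands and the build steps are taken from the log:

    # Recreate a context roughly equivalent to testdata/build (paths and file contents assumed)
    mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
    echo hello > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile

    # Build inside the cluster's runtime (CRI-O drives podman, as in the log) and confirm the tag
    out/minikube-linux-amd64 -p functional-228089 image build -t localhost/my-image:functional-228089 .
    out/minikube-linux-amd64 -p functional-228089 image ls | grep my-image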
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-228089
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.24s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089 --alsologtostderr: (1.057024604s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.35s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-228089
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089 --alsologtostderr: (1.019009609s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)
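The Setup/ImageLoadDaemon/ImageTagAndLoadDaemon sequence above pulls an image into the host docker daemon, retags it with the profile name, and pushes it into the cluster runtime. A condensed sketch using the same commands that appear in the log:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-228089
    out/minikube-linux-amd64 -p functional-228089 image load --daemon kicbase/echo-server:functional-228089
    out/minikube-linux-amd64 -p functional-228089 image ls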
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image save kicbase/echo-server:functional-228089 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.36s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image rm kicbase/echo-server:functional-228089 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-228089
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 image save --daemon kicbase/echo-server:functional-228089 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-228089
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.40s)
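Taken together, ImageSaveToFile/ImageRemove/ImageLoadFromFile/ImageSaveDaemon form a round trip: save the in-cluster image to a tar, remove it, load it back, then export it to the host docker daemon. A sketch with an arbitrary tar path (everything else mirrors the commands in the log):

    out/minikube-linux-amd64 -p functional-228089 image save kicbase/echo-server:functional-228089 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-228089 image rm kicbase/echo-server:functional-228089
    out/minikube-linux-amd64 -p functional-228089 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-228089 image save --daemon kicbase/echo-server:functional-228089
    # note the localhost/ prefix the test inspects on the host side
    docker image inspect localhost/kicbase/echo-server:functional-228089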
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo274122247/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.910812ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 05:54:51.811727   12374 retry.go:31] will retry after 489.013773ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo274122247/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh "sudo umount -f /mount-9p": exit status 1 (330.256137ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-228089 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo274122247/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.25s)
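The specific-port test drives a 9p mount on a fixed port and then checks that a forced unmount of an already-stopped mount fails cleanly ("not mounted", ssh exit 32). A sketch of the same flow, assuming an arbitrary host directory:

    # Start the 9p mount in the background on port 46464
    out/minikube-linux-amd64 mount -p functional-228089 /tmp/mount-src:/mount-9p --port 46464 &
    # The mount can take a moment to appear; the test retried findmnt after roughly half a second
    out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-228089 ssh -- ls -la /mount-9p
    # After the mount process is stopped, a forced unmount reports "not mounted"
    out/minikube-linux-amd64 -p functional-228089 ssh "sudo umount -f /mount-9p"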
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.86s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2374139309/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2374139309/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2374139309/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T" /mount1: exit status 1 (425.470225ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 05:54:54.130771   12374 retry.go:31] will retry after 437.841188ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-228089 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2374139309/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2374139309/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-228089 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2374139309/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.86s)
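VerifyCleanup starts three mounts of the same host directory and then uses --kill=true to terminate every mount helper for the profile at once. A sketch under the same assumption of an arbitrary host directory:

    out/minikube-linux-amd64 mount -p functional-228089 /tmp/mount-src:/mount1 &
    out/minikube-linux-amd64 mount -p functional-228089 /tmp/mount-src:/mount2 &
    out/minikube-linux-amd64 mount -p functional-228089 /tmp/mount-src:/mount3 &
    out/minikube-linux-amd64 -p functional-228089 ssh "findmnt -T" /mount1
    # Kill all mount processes for the profile in one step
    out/minikube-linux-amd64 mount -p functional-228089 --kill=true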
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-228089
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-228089
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-228089
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (108.68s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 05:55:56.056713   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:23.762231   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.076993   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.083496   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.094929   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.116398   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.157892   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.239682   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:53.401259   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m47.923430702s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
E1210 05:56:53.723369   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:54.365391   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (108.68s)
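StartCluster brings up a multi-control-plane HA profile with the docker driver and CRI-O; the follow-up status and profile checks are what the later HAppy*/Degraded* subtests reuse. The invocation, condensed from the log:

    out/minikube-linux-amd64 -p ha-697679 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
    out/minikube-linux-amd64 profile list --output json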
TestMultiControlPlane/serial/DeployApp (5.32s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- rollout status deployment/busybox
E1210 05:56:55.646647   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 kubectl -- rollout status deployment/busybox: (3.290482114s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E1210 05:56:58.208365   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-dr9mq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-x52tc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-dr9mq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-x52tc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-dr9mq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-x52tc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.32s)
TestMultiControlPlane/serial/PingHostFromPods (1.09s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-dr9mq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-dr9mq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-x52tc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-x52tc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
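DeployApp and PingHostFromPods verify in-cluster DNS and pod-to-host connectivity from each busybox replica. A sketch of the checks against a single pod; the pod name below is from this run and will differ elsewhere:

    out/minikube-linux-amd64 -p ha-697679 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- nslookup kubernetes.default.svc.cluster.local
    out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-697679 kubectl -- exec busybox-7b57f96db7-257zj -- sh -c "ping -c 1 192.168.49.1"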
TestMultiControlPlane/serial/AddWorkerNode (23.56s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node add --alsologtostderr -v 5
E1210 05:57:03.330670   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:57:13.572879   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 node add --alsologtostderr -v 5: (22.646324508s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.56s)
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-697679 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)
TestMultiControlPlane/serial/CopyFile (17.25s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp testdata/cp-test.txt ha-697679:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2948131495/001/cp-test_ha-697679.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679:/home/docker/cp-test.txt ha-697679-m02:/home/docker/cp-test_ha-697679_ha-697679-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test_ha-697679_ha-697679-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679:/home/docker/cp-test.txt ha-697679-m03:/home/docker/cp-test_ha-697679_ha-697679-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test_ha-697679_ha-697679-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679:/home/docker/cp-test.txt ha-697679-m04:/home/docker/cp-test_ha-697679_ha-697679-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test_ha-697679_ha-697679-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp testdata/cp-test.txt ha-697679-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2948131495/001/cp-test_ha-697679-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m02:/home/docker/cp-test.txt ha-697679:/home/docker/cp-test_ha-697679-m02_ha-697679.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test_ha-697679-m02_ha-697679.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m02:/home/docker/cp-test.txt ha-697679-m03:/home/docker/cp-test_ha-697679-m02_ha-697679-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test_ha-697679-m02_ha-697679-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m02:/home/docker/cp-test.txt ha-697679-m04:/home/docker/cp-test_ha-697679-m02_ha-697679-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test.txt"
E1210 05:57:34.054250   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test_ha-697679-m02_ha-697679-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp testdata/cp-test.txt ha-697679-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2948131495/001/cp-test_ha-697679-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m03:/home/docker/cp-test.txt ha-697679:/home/docker/cp-test_ha-697679-m03_ha-697679.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test_ha-697679-m03_ha-697679.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m03:/home/docker/cp-test.txt ha-697679-m02:/home/docker/cp-test_ha-697679-m03_ha-697679-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test_ha-697679-m03_ha-697679-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m03:/home/docker/cp-test.txt ha-697679-m04:/home/docker/cp-test_ha-697679-m03_ha-697679-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test_ha-697679-m03_ha-697679-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp testdata/cp-test.txt ha-697679-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2948131495/001/cp-test_ha-697679-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m04:/home/docker/cp-test.txt ha-697679:/home/docker/cp-test_ha-697679-m04_ha-697679.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679 "sudo cat /home/docker/cp-test_ha-697679-m04_ha-697679.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m04:/home/docker/cp-test.txt ha-697679-m02:/home/docker/cp-test_ha-697679-m04_ha-697679-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test_ha-697679-m04_ha-697679-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 cp ha-697679-m04:/home/docker/cp-test.txt ha-697679-m03:/home/docker/cp-test_ha-697679-m04_ha-697679-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m03 "sudo cat /home/docker/cp-test_ha-697679-m04_ha-697679-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.25s)
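CopyFile exercises minikube cp in every direction: host to node, node to node, and node back to the host, verifying each copy with ssh -n. A representative subset of the commands above (the host-side destination path is arbitrary):

    out/minikube-linux-amd64 -p ha-697679 cp testdata/cp-test.txt ha-697679:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-697679 cp ha-697679:/home/docker/cp-test.txt ha-697679-m02:/home/docker/cp-test_ha-697679_ha-697679-m02.txt
    out/minikube-linux-amd64 -p ha-697679 ssh -n ha-697679-m02 "sudo cat /home/docker/cp-test_ha-697679_ha-697679-m02.txt"
    out/minikube-linux-amd64 -p ha-697679 cp ha-697679:/home/docker/cp-test.txt /tmp/cp-test_ha-697679.txt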
TestMultiControlPlane/serial/StopSecondaryNode (18.87s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 node stop m02 --alsologtostderr -v 5: (18.158513495s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5: exit status 7 (712.654243ms)
-- stdout --
	ha-697679
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-697679-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-697679-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-697679-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1210 05:58:00.865197   91688 out.go:360] Setting OutFile to fd 1 ...
	I1210 05:58:00.865313   91688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:58:00.865319   91688 out.go:374] Setting ErrFile to fd 2...
	I1210 05:58:00.865323   91688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:58:00.865546   91688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 05:58:00.865758   91688 out.go:368] Setting JSON to false
	I1210 05:58:00.865782   91688 mustload.go:66] Loading cluster: ha-697679
	I1210 05:58:00.865880   91688 notify.go:221] Checking for updates...
	I1210 05:58:00.866307   91688 config.go:182] Loaded profile config "ha-697679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 05:58:00.866329   91688 status.go:174] checking status of ha-697679 ...
	I1210 05:58:00.866918   91688 cli_runner.go:164] Run: docker container inspect ha-697679 --format={{.State.Status}}
	I1210 05:58:00.888483   91688 status.go:371] ha-697679 host status = "Running" (err=<nil>)
	I1210 05:58:00.888513   91688 host.go:66] Checking if "ha-697679" exists ...
	I1210 05:58:00.888753   91688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-697679
	I1210 05:58:00.909541   91688 host.go:66] Checking if "ha-697679" exists ...
	I1210 05:58:00.909821   91688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:58:00.909879   91688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-697679
	I1210 05:58:00.929843   91688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/ha-697679/id_rsa Username:docker}
	I1210 05:58:01.023294   91688 ssh_runner.go:195] Run: systemctl --version
	I1210 05:58:01.030873   91688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:01.044438   91688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:58:01.104421   91688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 05:58:01.092989003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 05:58:01.105056   91688 kubeconfig.go:125] found "ha-697679" server: "https://192.168.49.254:8443"
	I1210 05:58:01.105105   91688 api_server.go:166] Checking apiserver status ...
	I1210 05:58:01.105150   91688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:01.117606   91688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	W1210 05:58:01.126707   91688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:58:01.126754   91688 ssh_runner.go:195] Run: ls
	I1210 05:58:01.130957   91688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 05:58:01.135350   91688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 05:58:01.135376   91688 status.go:463] ha-697679 apiserver status = Running (err=<nil>)
	I1210 05:58:01.135384   91688 status.go:176] ha-697679 status: &{Name:ha-697679 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:58:01.135398   91688 status.go:174] checking status of ha-697679-m02 ...
	I1210 05:58:01.135649   91688 cli_runner.go:164] Run: docker container inspect ha-697679-m02 --format={{.State.Status}}
	I1210 05:58:01.154313   91688 status.go:371] ha-697679-m02 host status = "Stopped" (err=<nil>)
	I1210 05:58:01.154334   91688 status.go:384] host is not running, skipping remaining checks
	I1210 05:58:01.154344   91688 status.go:176] ha-697679-m02 status: &{Name:ha-697679-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:58:01.154364   91688 status.go:174] checking status of ha-697679-m03 ...
	I1210 05:58:01.154641   91688 cli_runner.go:164] Run: docker container inspect ha-697679-m03 --format={{.State.Status}}
	I1210 05:58:01.174636   91688 status.go:371] ha-697679-m03 host status = "Running" (err=<nil>)
	I1210 05:58:01.174657   91688 host.go:66] Checking if "ha-697679-m03" exists ...
	I1210 05:58:01.174894   91688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-697679-m03
	I1210 05:58:01.192727   91688 host.go:66] Checking if "ha-697679-m03" exists ...
	I1210 05:58:01.193008   91688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:58:01.193049   91688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-697679-m03
	I1210 05:58:01.212246   91688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/ha-697679-m03/id_rsa Username:docker}
	I1210 05:58:01.306404   91688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:01.320038   91688 kubeconfig.go:125] found "ha-697679" server: "https://192.168.49.254:8443"
	I1210 05:58:01.320072   91688 api_server.go:166] Checking apiserver status ...
	I1210 05:58:01.320115   91688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 05:58:01.332775   91688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W1210 05:58:01.341516   91688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:58:01.341580   91688 ssh_runner.go:195] Run: ls
	I1210 05:58:01.345389   91688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 05:58:01.349396   91688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 05:58:01.349418   91688 status.go:463] ha-697679-m03 apiserver status = Running (err=<nil>)
	I1210 05:58:01.349426   91688 status.go:176] ha-697679-m03 status: &{Name:ha-697679-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 05:58:01.349438   91688 status.go:174] checking status of ha-697679-m04 ...
	I1210 05:58:01.349711   91688 cli_runner.go:164] Run: docker container inspect ha-697679-m04 --format={{.State.Status}}
	I1210 05:58:01.369794   91688 status.go:371] ha-697679-m04 host status = "Running" (err=<nil>)
	I1210 05:58:01.369817   91688 host.go:66] Checking if "ha-697679-m04" exists ...
	I1210 05:58:01.370057   91688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-697679-m04
	I1210 05:58:01.388511   91688 host.go:66] Checking if "ha-697679-m04" exists ...
	I1210 05:58:01.388797   91688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:58:01.388832   91688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-697679-m04
	I1210 05:58:01.407451   91688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/ha-697679-m04/id_rsa Username:docker}
	I1210 05:58:01.501234   91688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:58:01.515243   91688 status.go:176] ha-697679-m04 status: &{Name:ha-697679-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.87s)
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)
TestMultiControlPlane/serial/RestartSecondaryNode (8.69s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 node start m02 --alsologtostderr -v 5: (7.726556426s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.69s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.26s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 stop --alsologtostderr -v 5
E1210 05:58:15.016587   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 stop --alsologtostderr -v 5: (49.996403917s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 start --wait true --alsologtostderr -v 5
E1210 05:59:28.001191   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.007654   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.019179   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.040644   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.082137   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.163624   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.325120   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:28.646832   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:29.288772   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:30.570171   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:33.131843   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:36.938758   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:38.253706   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:48.495919   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 start --wait true --alsologtostderr -v 5: (54.121577329s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.26s)
TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 node delete m03 --alsologtostderr -v 5: (9.805974129s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)
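The node subtests above walk the full lifecycle of a multi-node profile: add a worker, stop and restart a control-plane node, then delete one and confirm the remaining nodes. Condensed from the commands in the log:

    out/minikube-linux-amd64 -p ha-697679 node add --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-697679 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-697679 node start m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-697679 node delete m03 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-697679 node list --alsologtostderr -v 5
    kubectl get nodes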
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)
TestMultiControlPlane/serial/StopCluster (44.12s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 stop --alsologtostderr -v 5
E1210 06:00:08.977823   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:00:49.940760   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 stop --alsologtostderr -v 5: (43.997684147s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5: exit status 7 (120.203694ms)
-- stdout --
	ha-697679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-697679-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-697679-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 06:00:51.523804  105841 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:00:51.523905  105841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:00:51.523910  105841 out.go:374] Setting ErrFile to fd 2...
	I1210 06:00:51.523914  105841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:00:51.524126  105841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:00:51.524296  105841 out.go:368] Setting JSON to false
	I1210 06:00:51.524319  105841 mustload.go:66] Loading cluster: ha-697679
	I1210 06:00:51.524983  105841 notify.go:221] Checking for updates...
	I1210 06:00:51.525477  105841 config.go:182] Loaded profile config "ha-697679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:00:51.525498  105841 status.go:174] checking status of ha-697679 ...
	I1210 06:00:51.526378  105841 cli_runner.go:164] Run: docker container inspect ha-697679 --format={{.State.Status}}
	I1210 06:00:51.546793  105841 status.go:371] ha-697679 host status = "Stopped" (err=<nil>)
	I1210 06:00:51.546817  105841 status.go:384] host is not running, skipping remaining checks
	I1210 06:00:51.546823  105841 status.go:176] ha-697679 status: &{Name:ha-697679 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:00:51.546867  105841 status.go:174] checking status of ha-697679-m02 ...
	I1210 06:00:51.547156  105841 cli_runner.go:164] Run: docker container inspect ha-697679-m02 --format={{.State.Status}}
	I1210 06:00:51.565777  105841 status.go:371] ha-697679-m02 host status = "Stopped" (err=<nil>)
	I1210 06:00:51.565818  105841 status.go:384] host is not running, skipping remaining checks
	I1210 06:00:51.565829  105841 status.go:176] ha-697679-m02 status: &{Name:ha-697679-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:00:51.565859  105841 status.go:174] checking status of ha-697679-m04 ...
	I1210 06:00:51.566159  105841 cli_runner.go:164] Run: docker container inspect ha-697679-m04 --format={{.State.Status}}
	I1210 06:00:51.585108  105841 status.go:371] ha-697679-m04 host status = "Stopped" (err=<nil>)
	I1210 06:00:51.585129  105841 status.go:384] host is not running, skipping remaining checks
	I1210 06:00:51.585134  105841 status.go:176] ha-697679-m04 status: &{Name:ha-697679-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (44.12s)

TestMultiControlPlane/serial/RestartCluster (51.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 06:00:56.056257   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.972834058s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.79s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (69.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 node add --control-plane --alsologtostderr -v 5
E1210 06:01:53.077531   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:02:11.862664   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:02:20.780561   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-697679 node add --control-plane --alsologtostderr -v 5: (1m8.910196201s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-697679 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

TestJSONOutput/start/Command (40.29s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-090860 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-090860 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.293678671s)
--- PASS: TestJSONOutput/start/Command (40.29s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-090860 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-090860 --output=json --user=testUser: (6.1239902s)
--- PASS: TestJSONOutput/stop/Command (6.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-251773 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-251773 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (84.720307ms)
-- stdout --
	{"specversion":"1.0","id":"7cababea-ae1d-42b6-bc02-621e16324250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-251773] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"569e0c98-8dd7-4a84-a5ee-58a223108a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"44fee148-7871-4433-87a9-39da02f477cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"765933a0-14f2-4720-8235-8326178218f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig"}}
	{"specversion":"1.0","id":"840e3c00-d25f-4c8d-ab98-bc9e48055961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube"}}
	{"specversion":"1.0","id":"d96a3572-57c8-4db6-9721-dcc8d637e993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1cee33c2-6f58-4dce-95c6-b51a5f27439d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3523e5f-22e2-4d21-b711-0558d8d5c009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-251773" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-251773
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (29.24s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-130667 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-130667 --network=: (27.04435168s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-130667" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-130667
E1210 06:04:28.001354   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-130667: (2.171198269s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.24s)

TestKicCustomNetwork/use_default_bridge_network (25.81s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-525766 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-525766 --network=bridge: (23.739790192s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-525766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-525766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-525766: (2.045215046s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.81s)

TestKicExistingNetwork (25.63s)

=== RUN   TestKicExistingNetwork
I1210 06:04:54.357713   12374 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 06:04:54.374396   12374 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 06:04:54.374483   12374 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 06:04:54.374511   12374 cli_runner.go:164] Run: docker network inspect existing-network
W1210 06:04:54.391222   12374 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 06:04:54.391251   12374 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1210 06:04:54.391266   12374 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1210 06:04:54.391418   12374 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:04:54.410568   12374 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-93569dd44e03 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:34:6b:89:a0:37} reservation:<nil>}
I1210 06:04:54.410968   12374 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f40260}
I1210 06:04:54.411004   12374 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 06:04:54.411045   12374 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 06:04:54.459739   12374 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-761692 --network=existing-network
E1210 06:04:55.704551   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-761692 --network=existing-network: (23.458450866s)
helpers_test.go:176: Cleaning up "existing-network-761692" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-761692
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-761692: (2.032354549s)
I1210 06:05:19.969262   12374 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.63s)

TestKicCustomSubnet (23.15s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-605926 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-605926 --subnet=192.168.60.0/24: (20.962330165s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-605926 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-605926" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-605926
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-605926: (2.170268738s)
--- PASS: TestKicCustomSubnet (23.15s)

TestKicStaticIP (23.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-744080 --static-ip=192.168.200.200
E1210 06:05:56.057857   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-744080 --static-ip=192.168.200.200: (21.431079019s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-744080 ip
helpers_test.go:176: Cleaning up "static-ip-744080" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-744080
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-744080: (2.172476136s)
--- PASS: TestKicStaticIP (23.76s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-988325 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-988325 --driver=docker  --container-runtime=crio: (19.844195054s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-991298 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-991298 --driver=docker  --container-runtime=crio: (21.185624498s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-988325
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-991298
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-991298" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-991298
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-991298: (2.371294229s)
helpers_test.go:176: Cleaning up "first-988325" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-988325
E1210 06:06:53.077663   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-988325: (2.40973589s)
--- PASS: TestMinikubeProfile (47.08s)

TestMountStart/serial/StartWithMountFirst (7.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-973176 --memory=3072 --mount-string /tmp/TestMountStartserial628080003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-973176 --memory=3072 --mount-string /tmp/TestMountStartserial628080003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.818334745s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.82s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-973176 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.88s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-993414 --memory=3072 --mount-string /tmp/TestMountStartserial628080003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-993414 --memory=3072 --mount-string /tmp/TestMountStartserial628080003/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.884316916s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.88s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-993414 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-973176 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-973176 --alsologtostderr -v=5: (1.71766042s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-993414 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-993414
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-993414: (1.271201756s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.45s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-993414
E1210 06:07:19.123940   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-993414: (6.452965851s)
--- PASS: TestMountStart/serial/RestartStopped (7.45s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-993414 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (65.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963049 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963049 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.526614597s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.01s)

TestMultiNode/serial/DeployApp2Nodes (4.08s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-963049 -- rollout status deployment/busybox: (2.62885522s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-8jd49 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-v8vr8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-8jd49 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-v8vr8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-8jd49 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-v8vr8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.08s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-8jd49 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-8jd49 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-v8vr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-963049 -- exec busybox-7b57f96db7-v8vr8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (53.57s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-963049 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-963049 -v=5 --alsologtostderr: (52.924817203s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.57s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-963049 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --output json --alsologtostderr
E1210 06:09:28.000352   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp testdata/cp-test.txt multinode-963049:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3488434093/001/cp-test_multinode-963049.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049:/home/docker/cp-test.txt multinode-963049-m02:/home/docker/cp-test_multinode-963049_multinode-963049-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m02 "sudo cat /home/docker/cp-test_multinode-963049_multinode-963049-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049:/home/docker/cp-test.txt multinode-963049-m03:/home/docker/cp-test_multinode-963049_multinode-963049-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m03 "sudo cat /home/docker/cp-test_multinode-963049_multinode-963049-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp testdata/cp-test.txt multinode-963049-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3488434093/001/cp-test_multinode-963049-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049-m02:/home/docker/cp-test.txt multinode-963049:/home/docker/cp-test_multinode-963049-m02_multinode-963049.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049 "sudo cat /home/docker/cp-test_multinode-963049-m02_multinode-963049.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049-m02:/home/docker/cp-test.txt multinode-963049-m03:/home/docker/cp-test_multinode-963049-m02_multinode-963049-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m03 "sudo cat /home/docker/cp-test_multinode-963049-m02_multinode-963049-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp testdata/cp-test.txt multinode-963049-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3488434093/001/cp-test_multinode-963049-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049-m03:/home/docker/cp-test.txt multinode-963049:/home/docker/cp-test_multinode-963049-m03_multinode-963049.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049 "sudo cat /home/docker/cp-test_multinode-963049-m03_multinode-963049.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 cp multinode-963049-m03:/home/docker/cp-test.txt multinode-963049-m02:/home/docker/cp-test_multinode-963049-m03_multinode-963049-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 ssh -n multinode-963049-m02 "sudo cat /home/docker/cp-test_multinode-963049-m03_multinode-963049-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.87s)

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-963049 node stop m03: (1.269645976s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963049 status: exit status 7 (502.526362ms)
-- stdout --
	multinode-963049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-963049-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-963049-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr: exit status 7 (522.656382ms)
-- stdout --
	multinode-963049
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-963049-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-963049-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1210 06:09:39.057171  165586 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:09:39.057278  165586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:39.057284  165586 out.go:374] Setting ErrFile to fd 2...
	I1210 06:09:39.057291  165586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:09:39.057522  165586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:09:39.057702  165586 out.go:368] Setting JSON to false
	I1210 06:09:39.057726  165586 mustload.go:66] Loading cluster: multinode-963049
	I1210 06:09:39.057837  165586 notify.go:221] Checking for updates...
	I1210 06:09:39.058046  165586 config.go:182] Loaded profile config "multinode-963049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:09:39.058063  165586 status.go:174] checking status of multinode-963049 ...
	I1210 06:09:39.058512  165586 cli_runner.go:164] Run: docker container inspect multinode-963049 --format={{.State.Status}}
	I1210 06:09:39.077902  165586 status.go:371] multinode-963049 host status = "Running" (err=<nil>)
	I1210 06:09:39.077924  165586 host.go:66] Checking if "multinode-963049" exists ...
	I1210 06:09:39.078202  165586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-963049
	I1210 06:09:39.097747  165586 host.go:66] Checking if "multinode-963049" exists ...
	I1210 06:09:39.098017  165586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:09:39.098056  165586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-963049
	I1210 06:09:39.117096  165586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/multinode-963049/id_rsa Username:docker}
	I1210 06:09:39.210983  165586 ssh_runner.go:195] Run: systemctl --version
	I1210 06:09:39.217611  165586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:09:39.230812  165586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:09:39.290037  165586 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-10 06:09:39.279790613 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:09:39.290599  165586 kubeconfig.go:125] found "multinode-963049" server: "https://192.168.67.2:8443"
	I1210 06:09:39.290633  165586 api_server.go:166] Checking apiserver status ...
	I1210 06:09:39.290665  165586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:39.302365  165586 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup
	W1210 06:09:39.310959  165586 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1246/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:09:39.311007  165586 ssh_runner.go:195] Run: ls
	I1210 06:09:39.314882  165586 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 06:09:39.320076  165586 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 06:09:39.320105  165586 status.go:463] multinode-963049 apiserver status = Running (err=<nil>)
	I1210 06:09:39.320115  165586 status.go:176] multinode-963049 status: &{Name:multinode-963049 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:09:39.320130  165586 status.go:174] checking status of multinode-963049-m02 ...
	I1210 06:09:39.320407  165586 cli_runner.go:164] Run: docker container inspect multinode-963049-m02 --format={{.State.Status}}
	I1210 06:09:39.339743  165586 status.go:371] multinode-963049-m02 host status = "Running" (err=<nil>)
	I1210 06:09:39.339765  165586 host.go:66] Checking if "multinode-963049-m02" exists ...
	I1210 06:09:39.340134  165586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-963049-m02
	I1210 06:09:39.358533  165586 host.go:66] Checking if "multinode-963049-m02" exists ...
	I1210 06:09:39.358783  165586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:09:39.358825  165586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-963049-m02
	I1210 06:09:39.376988  165586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22089-8832/.minikube/machines/multinode-963049-m02/id_rsa Username:docker}
	I1210 06:09:39.468838  165586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:09:39.482065  165586 status.go:176] multinode-963049-m02 status: &{Name:multinode-963049-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:09:39.482109  165586 status.go:174] checking status of multinode-963049-m03 ...
	I1210 06:09:39.482397  165586 cli_runner.go:164] Run: docker container inspect multinode-963049-m03 --format={{.State.Status}}
	I1210 06:09:39.518304  165586 status.go:371] multinode-963049-m03 host status = "Stopped" (err=<nil>)
	I1210 06:09:39.518327  165586 status.go:384] host is not running, skipping remaining checks
	I1210 06:09:39.518335  165586 status.go:176] multinode-963049-m03 status: &{Name:multinode-963049-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
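The status probe logged above is a useful pattern in its own right: minikube locates the kube-apiserver process, tolerates the missing freezer cgroup, and then calls https://<node-ip>:8443/healthz, treating an HTTP 200 as "Running". A minimal Go sketch of that last step, assuming a reachable endpoint and skipping TLS verification purely to keep the example self-contained (a real client would load the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes a Kubernetes apiserver /healthz endpoint the same way
// the status check above does: a 200 response means "Running".
func checkHealthz(endpoint string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Insecure only for this sketch; use the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK {
		return "Running", nil
	}
	return fmt.Sprintf("Unhealthy (%d: %s)", resp.StatusCode, body), nil
}

func main() {
	// The address mirrors the log above; substitute your own cluster endpoint.
	status, err := checkHealthz("https://192.168.67.2:8443")
	if err != nil {
		fmt.Println("apiserver status check failed:", err)
		return
	}
	fmt.Println("apiserver status =", status)
}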

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-963049 node start m03 -v=5 --alsologtostderr: (6.513251919s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-963049
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-963049
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-963049: (31.473804186s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963049 --wait=true -v=5 --alsologtostderr
E1210 06:10:56.056825   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963049 --wait=true -v=5 --alsologtostderr: (51.12980418s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-963049
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.73s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-963049 node delete m03: (4.671816312s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
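The final kubectl step above uses a go-template that walks every node's status.conditions and prints only the Ready condition, which is how the test confirms the surviving nodes still report True after the delete. The same traversal can be reproduced against the NodeList JSON; a minimal sketch, assuming kubectl is on PATH and pointed at the right context:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// nodeList mirrors only the fields the go-template above touches:
// .items[].metadata.name and .items[].status.conditions[].type/.status.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatalf("decode NodeList: %v", err)
	}
	// Print only the Ready condition per node, like the template does.
	for _, item := range nodes.Items {
		for _, cond := range item.Status.Conditions {
			if cond.Type == "Ready" {
				fmt.Printf("%s %s\n", item.Metadata.Name, cond.Status)
			}
		}
	}
}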

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-963049 stop: (28.529223521s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963049 status: exit status 7 (99.515231ms)

                                                
                                                
-- stdout --
	multinode-963049
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-963049-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr: exit status 7 (102.988827ms)

                                                
                                                
-- stdout --
	multinode-963049
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-963049-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:11:43.430107  175388 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:11:43.430552  175388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:11:43.430562  175388 out.go:374] Setting ErrFile to fd 2...
	I1210 06:11:43.430566  175388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:11:43.430756  175388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:11:43.430917  175388 out.go:368] Setting JSON to false
	I1210 06:11:43.430939  175388 mustload.go:66] Loading cluster: multinode-963049
	I1210 06:11:43.431108  175388 notify.go:221] Checking for updates...
	I1210 06:11:43.431286  175388 config.go:182] Loaded profile config "multinode-963049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:11:43.431303  175388 status.go:174] checking status of multinode-963049 ...
	I1210 06:11:43.431758  175388 cli_runner.go:164] Run: docker container inspect multinode-963049 --format={{.State.Status}}
	I1210 06:11:43.454701  175388 status.go:371] multinode-963049 host status = "Stopped" (err=<nil>)
	I1210 06:11:43.454725  175388 status.go:384] host is not running, skipping remaining checks
	I1210 06:11:43.454732  175388 status.go:176] multinode-963049 status: &{Name:multinode-963049 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:11:43.454758  175388 status.go:174] checking status of multinode-963049-m02 ...
	I1210 06:11:43.454997  175388 cli_runner.go:164] Run: docker container inspect multinode-963049-m02 --format={{.State.Status}}
	I1210 06:11:43.473973  175388 status.go:371] multinode-963049-m02 host status = "Stopped" (err=<nil>)
	I1210 06:11:43.474014  175388 status.go:384] host is not running, skipping remaining checks
	I1210 06:11:43.474028  175388 status.go:176] multinode-963049-m02 status: &{Name:multinode-963049-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.73s)
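As the two status invocations above show, "minikube status" deliberately exits non-zero (status 7) when a profile's host is stopped, so a caller has to distinguish "command failed" from "cluster is down". A small sketch of reading that exit code from Go, assuming the minikube binary on PATH and the profile name used in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "minikube status" exits 7 when the host is stopped; os/exec surfaces
	// that as an *exec.ExitError rather than a startup failure.
	cmd := exec.Command("minikube", "-p", "multinode-963049", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cluster is running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Println("cluster exists but is stopped (exit 7)")
	default:
		fmt.Println("status command failed:", err)
	}
}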

                                                
                                    
TestMultiNode/serial/RestartMultiNode (44.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963049 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 06:11:53.076815   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963049 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.80515878s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-963049 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.41s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-963049
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963049-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-963049-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.716544ms)

                                                
                                                
-- stdout --
	* [multinode-963049-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-963049-m02' is duplicated with machine name 'multinode-963049-m02' in profile 'multinode-963049'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-963049-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-963049-m03 --driver=docker  --container-runtime=crio: (19.315293804s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-963049
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-963049: exit status 80 (292.121947ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-963049 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-963049-m03 already exists in multinode-963049-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-963049-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-963049-m03: (2.390720048s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.14s)
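ValidateNameConflict exercises two guards: a new profile may not reuse a name that already exists as a machine name inside another profile (exit 14, MK_USAGE), and "node add" refuses a node whose generated name collides with an existing profile (exit 80). A minimal sketch of the first check, using a hypothetical in-memory map in place of the saved profile configs minikube actually reads from disk:

package main

import "fmt"

// machinesByProfile is a stand-in for the saved profile configs; the entries
// mirror the cluster used in the test above.
var machinesByProfile = map[string][]string{
	"multinode-963049": {"multinode-963049", "multinode-963049-m02"},
}

// validateProfileName rejects a new profile whose name is already in use as a
// machine name inside some existing profile.
func validateProfileName(name string) error {
	for profile, machines := range machinesByProfile {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	for _, candidate := range []string{"multinode-963049-m02", "multinode-963049-m03"} {
		if err := validateProfileName(candidate); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", candidate)
		}
	}
}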

                                                
                                    
TestPreload (102.88s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-375851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1210 06:13:16.142945   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-375851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (49.115610967s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-375851 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-375851 image pull gcr.io/k8s-minikube/busybox: (1.560009378s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-375851
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-375851: (6.265627172s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-375851 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1210 06:14:28.001866   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-375851 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.266284685s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-375851 image list
helpers_test.go:176: Cleaning up "test-preload-375851" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-375851
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-375851: (2.437754217s)
--- PASS: TestPreload (102.88s)
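TestPreload drives a fixed CLI sequence: create the cluster with --preload=false, pull an extra image, stop, restart with --preload=true, and confirm via "image list" that the pulled image survived the preload restart. A compressed sketch of the same sequence from Go, assuming the minikube binary on PATH and a hypothetical profile name:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run executes a minikube subcommand against the profile and fails fast.
func run(profile string, args ...string) string {
	full := append([]string{"-p", profile}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", full, err, out)
	}
	return string(out)
}

func main() {
	profile := "preload-demo" // hypothetical profile name for this sketch

	run(profile, "start", "--memory=3072", "--preload=false", "--driver=docker", "--container-runtime=crio")
	run(profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run(profile, "stop")
	run(profile, "start", "--preload=true", "--wait=true", "--driver=docker", "--container-runtime=crio")

	// The restart must not wipe the pulled image; "image list" should still show it.
	images := run(profile, "image", "list")
	if strings.Contains(images, "busybox") {
		fmt.Println("busybox survived the preload restart")
	} else {
		fmt.Println("busybox missing after restart:\n" + images)
	}
}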

                                                
                                    
TestScheduledStopUnix (95.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-576048 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-576048 --memory=3072 --driver=docker  --container-runtime=crio: (18.946606552s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576048 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:14:56.160506  192440 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:56.160796  192440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:56.160805  192440 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:56.160811  192440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:56.161104  192440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:14:56.161389  192440 out.go:368] Setting JSON to false
	I1210 06:14:56.161497  192440 mustload.go:66] Loading cluster: scheduled-stop-576048
	I1210 06:14:56.161830  192440 config.go:182] Loaded profile config "scheduled-stop-576048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:14:56.161898  192440 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/config.json ...
	I1210 06:14:56.162085  192440 mustload.go:66] Loading cluster: scheduled-stop-576048
	I1210 06:14:56.162171  192440 config.go:182] Loaded profile config "scheduled-stop-576048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-576048 -n scheduled-stop-576048
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576048 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:14:56.555137  192593 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:14:56.555254  192593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:56.555259  192593 out.go:374] Setting ErrFile to fd 2...
	I1210 06:14:56.555264  192593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:14:56.555545  192593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:14:56.555790  192593 out.go:368] Setting JSON to false
	I1210 06:14:56.555994  192593 daemonize_unix.go:73] killing process 192476 as it is an old scheduled stop
	I1210 06:14:56.556110  192593 mustload.go:66] Loading cluster: scheduled-stop-576048
	I1210 06:14:56.556440  192593 config.go:182] Loaded profile config "scheduled-stop-576048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:14:56.556555  192593 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/config.json ...
	I1210 06:14:56.556749  192593 mustload.go:66] Loading cluster: scheduled-stop-576048
	I1210 06:14:56.556879  192593 config.go:182] Loaded profile config "scheduled-stop-576048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 06:14:56.562934   12374 retry.go:31] will retry after 91.012µs: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.564086   12374 retry.go:31] will retry after 196.64µs: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.565225   12374 retry.go:31] will retry after 290.289µs: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.566355   12374 retry.go:31] will retry after 459.412µs: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.567490   12374 retry.go:31] will retry after 641.444µs: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.568627   12374 retry.go:31] will retry after 980.102µs: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.569753   12374 retry.go:31] will retry after 1.393349ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.571947   12374 retry.go:31] will retry after 1.238044ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.574154   12374 retry.go:31] will retry after 3.509209ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.578402   12374 retry.go:31] will retry after 2.546986ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.581629   12374 retry.go:31] will retry after 7.607549ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.589896   12374 retry.go:31] will retry after 10.780844ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.601164   12374 retry.go:31] will retry after 8.678706ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.610433   12374 retry.go:31] will retry after 15.883488ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.626690   12374 retry.go:31] will retry after 37.60737ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
I1210 06:14:56.664507   12374 retry.go:31] will retry after 64.573702ms: open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576048 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-576048 -n scheduled-stop-576048
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-576048
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-576048 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 06:15:22.498530  193150 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:15:22.498771  193150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:22.498779  193150 out.go:374] Setting ErrFile to fd 2...
	I1210 06:15:22.498784  193150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:15:22.499052  193150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:15:22.499301  193150 out.go:368] Setting JSON to false
	I1210 06:15:22.499378  193150 mustload.go:66] Loading cluster: scheduled-stop-576048
	I1210 06:15:22.499710  193150 config.go:182] Loaded profile config "scheduled-stop-576048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:15:22.499772  193150 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/scheduled-stop-576048/config.json ...
	I1210 06:15:22.499954  193150 mustload.go:66] Loading cluster: scheduled-stop-576048
	I1210 06:15:22.500051  193150 config.go:182] Loaded profile config "scheduled-stop-576048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1210 06:15:51.068758   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:15:56.058406   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-576048
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-576048: exit status 7 (83.008971ms)

                                                
                                                
-- stdout --
	scheduled-stop-576048
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-576048 -n scheduled-stop-576048
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-576048 -n scheduled-stop-576048: exit status 7 (83.673075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-576048" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-576048
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-576048: (4.749127818s)
--- PASS: TestScheduledStopUnix (95.27s)
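The retry.go lines in this test poll for the scheduled-stop pid file with steadily growing delays until it appears. A minimal sketch of that polling pattern, assuming a hypothetical file path; the doubling-with-jitter schedule here only approximates the intervals shown above, not the project's exact backoff:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path until it can be read or attempts run out,
// roughly doubling the delay each try like the retry log above.
func waitForFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		lastErr = err
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// Grow the delay with a little jitter so retries do not stay in lockstep.
		delay = delay*2 + time.Duration(rand.Int63n(int64(delay/2)+1))
	}
	return nil, lastErr
}

func main() {
	// Hypothetical pid file location; the real test reads it from the profile directory.
	if pid, err := waitForFile("/tmp/scheduled-stop-demo.pid", 16); err != nil {
		fmt.Println("gave up waiting for pid file:", err)
	} else {
		fmt.Println("scheduled stop pid:", string(pid))
	}
}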

                                                
                                    
TestInsufficientStorage (11.97s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-007144 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-007144 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.468390144s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a578c7d5-2670-4374-8163-e13acaaeaeea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-007144] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b58202e-61e0-413e-b680-87bca8bb1127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"6efaf467-35a1-4f80-972a-bafb1ce1aa4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"61ebf0d6-0300-4785-9652-21bffd02e039","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig"}}
	{"specversion":"1.0","id":"32e23572-a7fb-425f-8686-53c8547968cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube"}}
	{"specversion":"1.0","id":"42d11fef-28e2-4ac5-95c0-b487368bf346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8c6fa52a-072a-4c17-b1a3-5660e7ca08d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0333f49-4cf8-4fe9-b384-252f42185403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"dc19159f-671d-446d-9b40-f19f91606c3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bd7b193c-30d9-4be7-a8a3-4780768d6591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"010e8f0c-8dad-49ed-9655-754d049667a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"17d18bd9-61fa-4c54-b13c-9ec5b3ae9152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-007144\" primary control-plane node in \"insufficient-storage-007144\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2816508-f9f7-479b-bce0-06b567d400ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765319469-22089 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0d12e2b-55d5-4648-ace6-da14f6a06006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c685160c-96b0-4bfd-bc8e-4e1530d3c817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-007144 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-007144 --output=json --layout=cluster: exit status 7 (296.003967ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-007144","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-007144","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 06:16:22.172364  195671 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-007144" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-007144 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-007144 --output=json --layout=cluster: exit status 7 (289.361863ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-007144","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-007144","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 06:16:22.461935  195784 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-007144" does not appear in /home/jenkins/minikube-integration/22089-8832/kubeconfig
	E1210 06:16:22.472519  195784 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/insufficient-storage-007144/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-007144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-007144
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-007144: (1.919880061s)
--- PASS: TestInsufficientStorage (11.97s)
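With --output=json, minikube emits each step and the final error as CloudEvents-style JSON lines (type io.k8s.sigs.minikube.step / .error, payload under "data"), which is what lets the test detect RSRC_DOCKER_STORAGE and exit code 26 programmatically. A minimal decoder sketch, assuming the JSON lines are piped in on stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields of the JSON lines shown above; every value in
// the "data" object is a string in this output format.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe "minikube start --output=json ..." into this program.
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // these lines can be long
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}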

                                                
                                    
TestRunningBinaryUpgrade (64.55s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.378434503 start -p running-upgrade-538113 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.378434503 start -p running-upgrade-538113 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.588634697s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-538113 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-538113 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.63493156s)
helpers_test.go:176: Cleaning up "running-upgrade-538113" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-538113
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-538113: (2.63057512s)
--- PASS: TestRunningBinaryUpgrade (64.55s)

                                                
                                    
TestKubernetesUpgrade (161s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.352435794s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-800617
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-800617: (12.346730855s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-800617 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-800617 status --format={{.Host}}: exit status 7 (90.759811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m47.149194694s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-800617 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.977649ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-800617] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-800617
	    minikube start -p kubernetes-upgrade-800617 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8006172 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-800617 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-800617 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.310729291s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-800617" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-800617
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-800617: (2.589921253s)
--- PASS: TestKubernetesUpgrade (161.00s)
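The downgrade attempt above fails by design: the requested Kubernetes version is compared against the version already recorded for the existing cluster, and anything older is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED). A hedged sketch of that comparison using the github.com/blang/semver/v4 module (add it to go.mod to run this); the guard function is illustrative, not the project's actual code path:

package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// rejectDowngrade returns an error when the requested version is older than
// the version the existing cluster is already running.
func rejectDowngrade(existing, requested string) error {
	cur, err := semver.ParseTolerant(existing)
	if err != nil {
		return fmt.Errorf("parse existing version %q: %w", existing, err)
	}
	req, err := semver.ParseTolerant(requested)
	if err != nil {
		return fmt.Errorf("parse requested version %q: %w", requested, err)
	}
	if req.LT(cur) {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes v%s cluster to v%s", cur, req)
	}
	return nil
}

func main() {
	// The same pair of versions exercised by the test above.
	if err := rejectDowngrade("v1.35.0-beta.0", "v1.28.0"); err != nil {
		fmt.Println("refused:", err)
	} else {
		fmt.Println("upgrade allowed")
	}
}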

                                                
                                    
TestMissingContainerUpgrade (97.89s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2153893420 start -p missing-upgrade-490462 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2153893420 start -p missing-upgrade-490462 --memory=3072 --driver=docker  --container-runtime=crio: (51.250829053s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-490462
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-490462: (1.77813147s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-490462
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-490462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-490462 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.869962377s)
helpers_test.go:176: Cleaning up "missing-upgrade-490462" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-490462
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-490462: (2.298612249s)
--- PASS: TestMissingContainerUpgrade (97.89s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-844491 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-844491 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (108.554608ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-844491] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
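This subtest only exercises argument validation: passing --kubernetes-version together with --no-kubernetes is rejected up front with a usage error (exit 14, MK_USAGE) before any cluster work starts. A small sketch of the same mutual-exclusion guard using the standard flag package; the flag names match the CLI, the rest is illustrative:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two flags are mutually exclusive: asking for a specific Kubernetes
	// version while also disabling Kubernetes is reported as a usage error
	// before anything else happens.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags accepted; continuing startup")
}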

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-844491 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 06:16:53.077684   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-844491 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.320639982s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-844491 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.75s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-844491 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-844491 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.220912489s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-844491 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-844491 status -o json: exit status 2 (347.978289ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-844491","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-844491
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-844491: (2.028263667s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.60s)

                                                
                                    
TestNoKubernetes/serial/Start (9.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-844491 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-844491 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.972786955s)
--- PASS: TestNoKubernetes/serial/Start (9.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22089-8832/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-844491 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-844491 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.774156ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
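VerifyK8sNotRunning passes precisely because the command fails: "systemctl is-active --quiet service kubelet" exits non-zero inside the node (status 3 means inactive), and "minikube ssh" propagates that exit code. A sketch of asserting "service is not active" from Go, assuming the minikube binary on PATH and the profile name used here:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active exits 0 when the unit is active and non-zero
	// (typically 3) when it is inactive; minikube ssh passes that code through.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-844491",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected: kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet is not active (exit %d), as the test expects\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube ssh:", err)
	}
}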

                                                
                                    
TestNoKubernetes/serial/ProfileList (32.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.270286249s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.955326282s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.23s)

                                                
                                    
TestNetworkPlugins/group/false (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-201263 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-201263 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (164.437147ms)

                                                
                                                
-- stdout --
	* [false-201263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 06:17:48.052785  217164 out.go:360] Setting OutFile to fd 1 ...
	I1210 06:17:48.053075  217164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:17:48.053086  217164 out.go:374] Setting ErrFile to fd 2...
	I1210 06:17:48.053096  217164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:17:48.053340  217164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-8832/.minikube/bin
	I1210 06:17:48.053865  217164 out.go:368] Setting JSON to false
	I1210 06:17:48.054975  217164 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3619,"bootTime":1765343849,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 06:17:48.055032  217164 start.go:143] virtualization: kvm guest
	I1210 06:17:48.057244  217164 out.go:179] * [false-201263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1210 06:17:48.058608  217164 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 06:17:48.058651  217164 notify.go:221] Checking for updates...
	I1210 06:17:48.061185  217164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:17:48.062327  217164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-8832/kubeconfig
	I1210 06:17:48.063585  217164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-8832/.minikube
	I1210 06:17:48.064689  217164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 06:17:48.065861  217164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:17:48.067417  217164 config.go:182] Loaded profile config "NoKubernetes-844491": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 06:17:48.067553  217164 config.go:182] Loaded profile config "cert-expiration-936135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 06:17:48.067659  217164 config.go:182] Loaded profile config "kubernetes-upgrade-800617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 06:17:48.067817  217164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:17:48.092325  217164 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1210 06:17:48.092507  217164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:17:48.150547  217164 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-10 06:17:48.139347587 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.3] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 06:17:48.150645  217164 docker.go:319] overlay module found
	I1210 06:17:48.152433  217164 out.go:179] * Using the docker driver based on user configuration
	I1210 06:17:48.153605  217164 start.go:309] selected driver: docker
	I1210 06:17:48.153622  217164 start.go:927] validating driver "docker" against <nil>
	I1210 06:17:48.153635  217164 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:17:48.155308  217164 out.go:203] 
	W1210 06:17:48.156380  217164 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 06:17:48.157437  217164 out.go:203] 

                                                
                                                
** /stderr **
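Note: this exit is the expected outcome for the "false" network-plugin profile. With the crio container runtime minikube refuses to start when CNI is disabled, so no false-201263 cluster or kubeconfig context is ever created, which is why every debug probe below reports a missing context or profile. A minimal sketch of the distinction, assuming the profile was started with --cni=false (the flag this test group is presumed to exercise):

    # Rejected: crio requires a CNI plugin, so this exits with MK_USAGE (as above)
    out/minikube-linux-amd64 start -p false-201263 --cni=false --driver=docker --container-runtime=crio

    # Accepted: any concrete CNI (bridge shown here) satisfies the requirement
    out/minikube-linux-amd64 start -p false-201263 --cni=bridge --driver=docker --container-runtime=crio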
net_test.go:88: 
----------------------- debugLogs start: false-201263 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-201263" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:16:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-936135
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:17:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-800617
contexts:
- context:
    cluster: cert-expiration-936135
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:16:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-936135
  name: cert-expiration-936135
- context:
    cluster: kubernetes-upgrade-800617
    user: kubernetes-upgrade-800617
  name: kubernetes-upgrade-800617
current-context: ""
kind: Config
users:
- name: cert-expiration-936135
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/cert-expiration-936135/client.crt
    client-key: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/cert-expiration-936135/client.key
- name: kubernetes-upgrade-800617
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/kubernetes-upgrade-800617/client.crt
    client-key: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/kubernetes-upgrade-800617/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-201263

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-201263"

                                                
                                                
----------------------- debugLogs end: false-201263 [took: 3.17875737s] --------------------------------
helpers_test.go:176: Cleaning up "false-201263" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-201263
--- PASS: TestNetworkPlugins/group/false (3.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-844491
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-844491: (1.299183511s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-844491 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-844491 --driver=docker  --container-runtime=crio: (6.364228281s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-844491 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-844491 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.683487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)
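Note: the exit status 1 above is the desired result: with no Kubernetes requested, the kubelet unit must not be active inside the node container. A minimal sketch of reproducing the check by hand, using the commands and profile name from this log:

    # systemctl is-active --quiet exits non-zero when the unit is not running,
    # so a non-zero exit from ssh confirms the kubelet is stopped
    out/minikube-linux-amd64 ssh -p NoKubernetes-844491 "sudo systemctl is-active --quiet service kubelet"
    echo "kubelet active check exit code: $?"   # expected: non-zero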

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (48.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1731799398 start -p stopped-upgrade-709856 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1731799398 start -p stopped-upgrade-709856 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.995677591s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1731799398 -p stopped-upgrade-709856 stop
E1210 06:19:28.000950   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1731799398 -p stopped-upgrade-709856 stop: (1.949719493s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-709856 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-709856 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.87497956s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (48.82s)
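Note: the upgrade scenario is: provision with an older released binary, stop the cluster, then start the same profile with the binary under test and expect it to adopt the existing state. A condensed sketch of that sequence using the paths and profile from this run (the /tmp path is the downloaded v1.35.0 release binary):

    /tmp/minikube-v1.35.0.1731799398 start -p stopped-upgrade-709856 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.35.0.1731799398 -p stopped-upgrade-709856 stop
    # the freshly built binary must start the stopped profile without recreating it
    out/minikube-linux-amd64 start -p stopped-upgrade-709856 --memory=3072 --driver=docker --container-runtime=crio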

                                                
                                    
x
+
TestPause/serial/Start (70.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-203121 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-203121 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m10.325950194s)
--- PASS: TestPause/serial/Start (70.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-709856
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-709856: (1.725081095s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.641096505s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (49.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (49.2437972s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (51.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.043475568s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-201263 "pgrep -a kubelet"
I1210 06:20:42.335402   12374 config.go:182] Loaded profile config "auto-201263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-n8fh2" [c7cc003e-8099-4d08-8a3a-7b053783c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-n8fh2" [c7cc003e-8099-4d08-8a3a-7b053783c0a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00508007s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)
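Note: the NetCatPod step deploys the dnsutils-based netcat Deployment into the profile's default namespace and waits for it to become Ready. A rough manual equivalent (a sketch, reusing the manifest and context from this run) would be:

    kubectl --context auto-201263 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-201263 wait --for=condition=Ready pod -l app=netcat --timeout=15m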

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-203121 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-203121 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.474333984s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-sndmq" [0a9fda50-386f-4548-b300-0f2b61dfb24a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-sndmq" [0a9fda50-386f-4548-b300-0f2b61dfb24a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003822776s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
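Note: ControllerPod waits for the CNI's own controller pods to report Ready before any connectivity probes run. A hand-run sketch for this calico profile, using the label from the log:

    kubectl --context calico-201263 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m
    kubectl --context calico-201263 -n kube-system get pods -l k8s-app=calico-node -o wide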

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
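Note: the DNS, Localhost and HairPin steps all reuse the netcat Deployment: DNS resolves the in-cluster kubernetes.default service, Localhost checks the pod can reach its own port via localhost, and HairPin checks it can reach itself back through its Service name. The probes, as run against the auto profile in the log lines above:

    kubectl --context auto-201263 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"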

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-201263 "pgrep -a kubelet"
I1210 06:20:55.552779   12374 config.go:182] Loaded profile config "calico-201263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mvx6n" [48e0f392-09c8-47cd-9e59-9ccd57391a68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 06:20:56.056838   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-mvx6n" [48e0f392-09c8-47cd-9e59-9ccd57391a68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004561732s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-201263 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fmgmp" [5206e57e-e33e-44bc-9854-f2b68ec95915] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fmgmp" [5206e57e-e33e-44bc-9854-f2b68ec95915] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.05850163s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m15.555803315s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.386643142s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (64.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.615315968s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (70.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1210 06:21:53.077817   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-237456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-201263 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.312963649s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-5tffx" [4591ec09-59b7-4b69-a7a1-9b9477fd88f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004573148s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-201263 "pgrep -a kubelet"
I1210 06:22:05.766349   12374 config.go:182] Loaded profile config "flannel-201263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8x2xq" [2711ed62-d6ff-4483-8014-eb0f86f1ac94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8x2xq" [2711ed62-d6ff-4483-8014-eb0f86f1ac94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004260666s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-cgz4g" [18fac054-9823-47aa-92a8-77a2ecfc315b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004784841s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-201263 "pgrep -a kubelet"
I1210 06:22:25.667614   12374 config.go:182] Loaded profile config "kindnet-201263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-562kz" [348ab29f-3bf9-4a0b-9f44-fd4c15a0f7f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-562kz" [348ab29f-3bf9-4a0b-9f44-fd4c15a0f7f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005345046s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-201263 "pgrep -a kubelet"
I1210 06:22:33.613713   12374 config.go:182] Loaded profile config "enable-default-cni-201263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2xnqj" [d4d03c7e-6444-4da8-96c0-bbf8bb345c02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2xnqj" [d4d03c7e-6444-4da8-96c0-bbf8bb345c02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004029454s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (52.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.064231995s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.06s)
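Note: FirstStart pins the cluster to an older Kubernetes release via --kubernetes-version. A quick, hedged way to confirm the node actually runs the requested release (context name taken from the log):

    kubectl --context old-k8s-version-424086 get nodes -o wide   # VERSION column should show v1.28.0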

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-201263 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-201263 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6qs8p" [efef041b-87b3-40e2-98c3-60499dfb342b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6qs8p" [efef041b-87b3-40e2-98c3-60499dfb342b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003941133s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-201263 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-201263 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (52.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (52.875960284s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (46.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (46.423755341s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.478993001s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-424086 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a4d86c19-5b30-46da-bcf1-505d9e0c52a3] Pending
helpers_test.go:353: "busybox" [a4d86c19-5b30-46da-bcf1-505d9e0c52a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a4d86c19-5b30-46da-bcf1-505d9e0c52a3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00407757s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-424086 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-424086 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-424086 --alsologtostderr -v=3: (16.223472054s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-713838 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [da64c2c1-2faf-4ff5-9b06-95db44ebc605] Pending
helpers_test.go:353: "busybox" [da64c2c1-2faf-4ff5-9b06-95db44ebc605] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [da64c2c1-2faf-4ff5-9b06-95db44ebc605] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003931655s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-713838 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-133470 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [711c9c67-e967-4f68-9f76-d8694d86d75f] Pending
helpers_test.go:353: "busybox" [711c9c67-e967-4f68-9f76-d8694d86d75f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [711c9c67-e967-4f68-9f76-d8694d86d75f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003501253s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-133470 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086: exit status 7 (81.697444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-424086 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (43.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-424086 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.023530487s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-424086 -n old-k8s-version-424086
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-713838 --alsologtostderr -v=3
E1210 06:23:59.125906   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/addons-028052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-713838 --alsologtostderr -v=3: (18.22236115s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (18.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-133470 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-133470 --alsologtostderr -v=3: (18.579726423s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0d90f3d0-1378-4217-b9cc-2116a1d1dbbb] Pending
helpers_test.go:353: "busybox" [0d90f3d0-1378-4217-b9cc-2116a1d1dbbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0d90f3d0-1378-4217-b9cc-2116a1d1dbbb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00320242s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-643991 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-643991 --alsologtostderr -v=3: (18.390205823s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838: exit status 7 (102.516124ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-713838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (44.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-713838 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.514078441s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713838 -n no-preload-713838
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470: exit status 7 (106.595668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-133470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (50.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1210 06:24:28.000262   12374 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/functional-228089/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-133470 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (49.960416619s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-133470 -n embed-certs-133470
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991: exit status 7 (118.51204ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-643991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-643991 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (52.467331628s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-643991 -n default-k8s-diff-port-643991
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-gwx7s" [3e9c8ba6-46d4-4305-9a87-ffc54ec95c34] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004020679s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-gwx7s" [3e9c8ba6-46d4-4305-9a87-ffc54ec95c34] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004736797s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-424086 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-424086 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (25.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (25.90743654s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-5pf6p" [df801254-e04d-483d-9c00-cdeb0ab6f850] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00368092s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-5pf6p" [df801254-e04d-483d-9c00-cdeb0ab6f850] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003628351s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-713838 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-tvh5q" [91ce86a6-7d58-4648-9399-d3b07c7e250c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003692377s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-713838 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-tvh5q" [91ce86a6-7d58-4648-9399-d3b07c7e250c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00398961s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-133470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-133470 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-llkbc" [390a0b83-fd9c-42b8-8732-362bbb3a7e9a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002954397s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-126107 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-126107 --alsologtostderr -v=3: (2.471386411s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-llkbc" [390a0b83-fd9c-42b8-8732-362bbb3a7e9a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004049433s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-643991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107: exit status 7 (81.454267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-126107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-126107 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.021098469s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-126107 -n newest-cni-126107
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-643991 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-126107 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
371 TestNetworkPlugins/group/kubenet 3.37
379 TestNetworkPlugins/group/cilium 3.71
393 TestStartStop/group/disable-driver-mounts 0.2
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-201263 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-201263" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:16:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-936135
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:17:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-800617
contexts:
- context:
    cluster: cert-expiration-936135
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:16:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-936135
  name: cert-expiration-936135
- context:
    cluster: kubernetes-upgrade-800617
    user: kubernetes-upgrade-800617
  name: kubernetes-upgrade-800617
current-context: ""
kind: Config
users:
- name: cert-expiration-936135
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/cert-expiration-936135/client.crt
    client-key: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/cert-expiration-936135/client.key
- name: kubernetes-upgrade-800617
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/kubernetes-upgrade-800617/client.crt
    client-key: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/kubernetes-upgrade-800617/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-201263

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-201263"

                                                
                                                
----------------------- debugLogs end: kubenet-201263 [took: 3.201566117s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-201263" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-201263
--- SKIP: TestNetworkPlugins/group/kubenet (3.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-201263 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-201263" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:16:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-936135
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22089-8832/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:17:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-800617
contexts:
- context:
    cluster: cert-expiration-936135
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 06:16:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-936135
  name: cert-expiration-936135
- context:
    cluster: kubernetes-upgrade-800617
    user: kubernetes-upgrade-800617
  name: kubernetes-upgrade-800617
current-context: ""
kind: Config
users:
- name: cert-expiration-936135
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/cert-expiration-936135/client.crt
    client-key: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/cert-expiration-936135/client.key
- name: kubernetes-upgrade-800617
  user:
    client-certificate: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/kubernetes-upgrade-800617/client.crt
    client-key: /home/jenkins/minikube-integration/22089-8832/.minikube/profiles/kubernetes-upgrade-800617/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-201263

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-201263" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-201263"

                                                
                                                
----------------------- debugLogs end: cilium-201263 [took: 3.541164483s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-201263" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-201263
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-998062" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-998062
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    